Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

This notebook provides a template for you to implement the required functionality in stages and successfully complete this project. If additional code is needed that cannot be included in the notebook, make sure the Python code is successfully imported and include it with your submission.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.


Step 0: Import required packages

In [1]:
## LIST OF ALL IMPORTS
import os
import csv
import math
import random
import time
import os.path as path
from datetime import datetime

import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import cv2
import tensorflow as tf

from sklearn.metrics import confusion_matrix
from sklearn.utils import shuffle
from tensorflow.contrib.layers import flatten
from tensorflow.contrib.learn import monitors
from tensorflow.contrib.metrics import streaming_accuracy, streaming_precision, streaming_recall
from pandas_ml.confusion_matrix import ConfusionMatrix as ConfusionMatrix_pandas

# Visualizations will be shown in the notebook.
%matplotlib inline 

repeat=0 # Binary check to see if an augmented dataset needs to be recreated [0- skip augmentation, 1- augment data].

Step 1: Load Datasets

In [2]:
## LOAD PICKLED DATASET & SPLIT DATA

t0=time.clock() # Obtain run-times.
print("Obtaining datasets.")
# Training, Validation, and Testing data.
training_file='traffic-signs-data/train.p' 
validation_file='traffic-signs-data/valid.p'
testing_file='traffic-signs-data/test.p'
augment_file='traffic-signs-data/augmented_train.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']


print("Ensuring equal lengths for features and labels.")
assert(len(X_train)==len(y_train))
assert(len(X_valid)==len(y_valid))
assert(len(X_test)==len(y_test))

# Dataset labels in the German traffic sign dataset.
label_legend='signnames.csv'
arr_classes=pd.read_csv(label_legend,index_col=None).values
arr_classes=arr_classes[:,1]
print("All datasets loaded.")
Obtaining datasets.
Ensuring equal lengths for features and labels.
All datasets loaded.

Step 2: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height), representing the original width and height of the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
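
A quick way to confirm these keys and array shapes is sketched below (illustrative only; it assumes the train dictionary loaded in Step 1 is still in scope):

# Sketch: inspect the keys and array shapes of the training pickle.
for key, value in train.items():
    print(key, np.asarray(value).shape)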

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.

Basic Summary of the Data Set Using Python, Numpy and/or Pandas

In [3]:
## BASIC UNDERSTANDING OF DATASET

# Number of training examples.
n_train = X_train.shape[0]

# Number of validation examples.
n_validation = X_valid.shape[0]

# Number of testing examples.
n_test = X_test.shape[0]

# Shape of a traffic sign image.
image_shape = X_train[1].shape

# Unique classes/labels in the dataset.
n_classes = len(np.unique(y_train))

# Further manipulation.
total_sets=n_train+n_validation+n_test
frac_train=n_train/total_sets
frac_valid=n_validation/total_sets
frac_test=n_test/total_sets

class_list,class_indices,class_counts=np.unique(y_train, return_index=True, return_counts=True)

print("Class indices", class_indices)
print("Class counts", class_counts)


print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("There are",total_sets, "datasets, split",round(frac_train,2),"-",\
      round(frac_valid,2),"-",round(frac_test,2),"training, validation, and testing respectively.")
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Class indices [ 9960  2220 31439  5370  6810 12360 21450 23730 15870 11040 17130  8580
 27329 21810 29219 29909  5010 30449 20370  6630 25950 25680  4500  1770
 10800 33449  1230 10350 26849 10560 25020   210 10140 26250 20010 18930
   900  4830 14010 25410  4200     0  9750]
Class counts [ 180 1980 2010 1260 1770 1650  360 1290 1260 1320 1800 1170 1890 1920  690
  540  360  990 1080  180  300  270  330  450  240 1350  540  210  480  240
  390  690  210  599  360 1080  330  180 1860  270  300  210  210]
Number of training examples = 34799
Number of validation examples = 4410
Number of testing examples = 12630
There are 51839 datasets, split 0.67 - 0.09 - 0.24 training, validation, and testing respectively.
Image data shape = (32, 32, 3)
Number of classes = 43

Exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
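
For example, one way to check whether the three splits share a similar class distribution is to compare normalized class frequencies (a rough sketch, not one of the graded cells; it assumes the variables loaded above):

# Sketch: normalized class frequencies for each split.
train_freq=np.bincount(y_train,minlength=n_classes)/len(y_train)
valid_freq=np.bincount(y_valid,minlength=n_classes)/len(y_valid)
test_freq=np.bincount(y_test,minlength=n_classes)/len(y_test)
print("Max |train-valid| frequency gap:",np.abs(train_freq-valid_freq).max())
print("Max |train-test| frequency gap:",np.abs(train_freq-test_freq).max())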

Visualizing sample training set images

In [4]:
## DATA EXPLORATION & VISUALIZATION

# Plot traffic sign images.
for Class,Index,Counts in zip(class_list,class_indices,class_counts):
    print("Class {} {} : {} samples.".format(Class,arr_classes[Class],Counts))
    Main=plt.figure(figsize=(10,5))
    choice=random.sample(range(Index,Counts+Index),10)
    for i in range (0,10):
        row=Main.add_subplot(1,10,i+1,xticks=[],yticks=[])
        row.imshow(X_train[choice[i]])
    plt.show()
Class 0 Speed limit (20km/h) : 180 samples.
Class 1 Speed limit (30km/h) : 1980 samples.
Class 2 Speed limit (50km/h) : 2010 samples.
Class 3 Speed limit (60km/h) : 1260 samples.
Class 4 Speed limit (70km/h) : 1770 samples.
Class 5 Speed limit (80km/h) : 1650 samples.
Class 6 End of speed limit (80km/h) : 360 samples.
Class 7 Speed limit (100km/h) : 1290 samples.
Class 8 Speed limit (120km/h) : 1260 samples.
Class 9 No passing : 1320 samples.
Class 10 No passing for vehicles over 3.5 metric tons : 1800 samples.
Class 11 Right-of-way at the next intersection : 1170 samples.
Class 12 Priority road : 1890 samples.
Class 13 Yield : 1920 samples.
Class 14 Stop : 690 samples.
Class 15 No vehicles : 540 samples.
Class 16 Vehicles over 3.5 metric tons prohibited : 360 samples.
Class 17 No entry : 990 samples.
Class 18 General caution : 1080 samples.
Class 19 Dangerous curve to the left : 180 samples.
Class 20 Dangerous curve to the right : 300 samples.
Class 21 Double curve : 270 samples.
Class 22 Bumpy road : 330 samples.
Class 23 Slippery road : 450 samples.
Class 24 Road narrows on the right : 240 samples.
Class 25 Road work : 1350 samples.
Class 26 Traffic signals : 540 samples.
Class 27 Pedestrians : 210 samples.
Class 28 Children crossing : 480 samples.
Class 29 Bicycles crossing : 240 samples.
Class 30 Beware of ice/snow : 390 samples.
Class 31 Wild animals crossing : 690 samples.
Class 32 End of all speed and passing limits : 210 samples.
Class 33 Turn right ahead : 599 samples.
Class 34 Turn left ahead : 360 samples.
Class 35 Ahead only : 1080 samples.
Class 36 Go straight or right : 330 samples.
Class 37 Go straight or left : 180 samples.
Class 38 Keep right : 1860 samples.
Class 39 Keep left : 270 samples.
Class 40 Roundabout mandatory : 300 samples.
Class 41 End of no passing : 210 samples.
Class 42 End of no passing by vehicles over 3.5 metric tons : 210 samples.

Training and Testing set representations

In [5]:
## DATA EXPLORATION & VISUALIZATION

# Histogram plot to identify count of each class.
hist_ytrain_count=np.bincount(y_train)
hist_xtrain_count=len(hist_ytrain_count)

plt.figure(figsize=(12,5))
plt.hist(y_train, hist_xtrain_count, normed=False, facecolor='green',align='mid',rwidth=0.8,alpha=0.75,label='Training set')
plt.hist(y_valid,hist_xtrain_count, normed=False, facecolor='blue',align='mid',rwidth=0.8, alpha=0.75, label='Validation set')
plt.legend(loc='upper right')
plt.xlabel('Class Label')
plt.ylabel('Count')
plt.title(r'Distribution of training and validation dataset images for each class')
plt.axis([0, 43,0,2500])
plt.grid(True)
plt.tight_layout()
plt.show()

plt.figure(figsize=(12,5))
plt.hist(y_train, hist_xtrain_count, normed=False, facecolor='green',align='mid',rwidth=0.8,alpha=0.75,label='Training set')
plt.hist(y_test,hist_xtrain_count, normed=False, facecolor='black',align='mid',rwidth=0.8, alpha=0.75, label='Testing set')
plt.legend(loc='upper right')
plt.xlabel('Class Label')
plt.ylabel('Count')
plt.title(r'Distribution of training and testing dataset images for each class')
plt.axis([0, 43,0,2500])
plt.grid(True)
plt.tight_layout()
plt.show()

Step 3: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture (is the network over or underfitting?)
  • Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.

Augment the Dataset

In [6]:
max_bincount=max(class_counts)
min_bincount=min(class_counts)
print("This step tries to smooth out possible wrong classifications and bias due to varying class lengths as the maximum class size is {} and the minimum is {}.".format(max_bincount,min_bincount))
This step tries to smooth out possible wrong classifications and bias due to varying class lengths as the maximum class size is 2010 and the minimum is 180.
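
A rough sketch of what balancing will require, using the threshold of 900 samples per class that the augmentation loop below works towards (the threshold value is taken from that later cell):

# Sketch: per-class shortfall relative to the balance threshold used later.
balance_threshold=900
shortfall=np.maximum(0,balance_threshold-class_counts)
print("Classes below the threshold:",int(np.sum(shortfall>0)))
print("Synthetic samples needed to reach it:",int(shortfall.sum()))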

Preprocess the Dataset

The dataset is preprocessed by grayscale conversion followed by min-max normalization of each image to the [0, 1] range (rather than the suggested (pixel - 128)/128 scaling); the datasets are then shuffled.

In [7]:
## DATA PRE-PROCESS FUNCTIONS
# if (path.exists(augment_file) and repeat==0):
#     print("Preprocessing skipped as the augmented pickle already exists.") 
#     pass
# else:
def grayscale(raw_image): # Convert to grayscale
    return cv2.cvtColor(raw_image,cv2.COLOR_BGR2GRAY)

def normalize(raw_image): # Min-Max normalization
    min_pixel=np.min(raw_image)
    max_pixel=np.max(raw_image)
    return ((raw_image-min_pixel)/(max_pixel-min_pixel))

def equalize(raw_image): # Adaptive Histogram equalization
    clahe=cv2.createCLAHE()
    return clahe.apply(raw_image)

def randomize(dataset,labels): # Shuffle features and labels with the same permutation
    permutation=np.random.permutation(labels.shape[0])
    shuffled_dataset=dataset[permutation]
    shuffled_labels=labels[permutation]
    return (shuffled_dataset,shuffled_labels)    
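
A minimal sanity check of these helpers on a single training image (illustrative only; it uses the raw X_train loaded in Step 1):

# Sketch: grayscale + min-max normalize one image and confirm the output range.
sample=normalize(grayscale(X_train[0]))
print("Shape:",sample.shape,"min:",sample.min(),"max:",sample.max()) # expect (32, 32), 0.0, 1.0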
In [8]:
if (path.exists(augment_file) and repeat==0):
    print("Preprocessing skipped as the augmented pickle already exists.")
    X_valid_normalized=[]
    X_test_normalized=[]

    
    for image in X_valid:
        valid_image_grayscale=(grayscale(image))
        X_valid_normalized.append(normalize(valid_image_grayscale))

    for image in X_test:
        test_image_grayscale=(grayscale(image))
        X_test_normalized.append(normalize(test_image_grayscale))
    
    X_valid_normalized=np.asarray(X_valid_normalized)
    X_test_normalized=np.asarray(X_test_normalized)
    
    pass
else:
    print("Preprocessing started.")

    X_train_normalized=[]
    X_valid_normalized=[]
    X_test_normalized=[]

    for image in X_train:
        train_image_grayscale=(grayscale(image))
        X_train_normalized.append(normalize(train_image_grayscale))
    
    for image in X_valid:
        valid_image_grayscale=(grayscale(image))
        X_valid_normalized.append(normalize(valid_image_grayscale))

    for image in X_test:
        test_image_grayscale=(grayscale(image))
        X_test_normalized.append(normalize(test_image_grayscale))
        
    X_train_normalized=np.asarray(X_train_normalized)
    X_valid_normalized=np.asarray(X_valid_normalized)
    X_test_normalized=np.asarray(X_test_normalized)
    
    print("Preprocessing: Grayscaling, Histogram Equalization, and Normalizing complete.")
Preprocessing skipped as the augmented pickle already exists.

Augment the Dataset

The training set is enlarged by applying various image manipulations to each class, with the aim of obtaining a more balanced representation of classes.

In [9]:
## IMAGE MANIPULATION FUNCTIONS

if (path.exists(augment_file) and repeat==0):
    print("Preprocessing skipped as the augmented pickle already exists.") 
    pass
else:

    def rotation(raw_image,angle_range):
        theta_rot=angle_range*np.random.uniform()-0.5*angle_range # Symmetric range [-angle_range/2, angle_range/2]
        rows,cols=raw_image.shape
        rot_M=cv2.getRotationMatrix2D((cols/2,rows/2),theta_rot,1)
        return cv2.warpAffine(raw_image,rot_M,(cols,rows))

    def translation(raw_image,translation_range):
        rows,cols=raw_image.shape
        trans_x=translation_range*np.random.uniform()-0.5*translation_range
        trans_y=translation_range*np.random.uniform()-0.5*translation_range
        trans_M=np.float32([[1,0,trans_x],[0,1,trans_y]])
        return cv2.warpAffine(raw_image,trans_M,(cols,rows))

    def shear(raw_image):
        rows,cols=raw_image.shape
        x1=0.2*cols;y1=0.2*rows
        x2=0.8*cols;y2=0.8*rows
        mult_x=(np.random.random(3)-0.5)*cols*(0.05)
        mult_y=(np.random.random(3)-0.5)*rows*(0.05)

        points_1=np.float32([[y1,x1],
                            [y2,x1],
                            [y1,x2]])

        points_2=np.float32([[y1+mult_y[0],x1+mult_x[0]],
                            [y2+mult_y[1],x1+mult_x[1]],
                            [y1+mult_y[2],x2+mult_x[2]]])

        shear_M=cv2.getAffineTransform(points_1,points_2)
        return cv2.warpAffine(raw_image,shear_M,(cols,rows))

    def scale(raw_image):
        rows,cols=raw_image.shape
        scale_factor=np.random.uniform(0.5,5.0)
        return cv2.resize(raw_image,dsize=(32,32),fx=scale_factor,fy=scale_factor,interpolation=cv2.INTER_CUBIC)

    # def contrast(raw_image): # Removed because we are now working with a 1-channel image
    #     rows,cols=raw_image.shape
    #     hsv=cv2.cvtColor(raw_image,cv2.COLOR_BGR2HSV)
    #     h_channel,s_channel,v_channel=cv2.split(hsv)
    #     h_channel=np.add(h_channel,random.uniform(-100,100))
    #     v_channel=np.add(v_channel,random.uniform(-100,100))
    #     merged=np.uint8(np.dstack((h_channel,s_channel,v_channel)))
    #     return cv2.cvtColor(merged,cv2.COLOR_HSV2BGR)

    def flip_secondary(X,Y):
        X_flip_out=np.empty([0,32,32])
        y_flip_out=np.empty([0])
        
        label_flip_vertical=np.array([9,10,11,12,13,15,17,18,21,22,23,25,26,\
                                    27,28,29,30,31,32,35,40,41,42])
        label_flip_horizontal=np.array([1,5,7,9,10,12,15,17,32,38,39,40,41,42])
        label_flip_classes=np.array([[19,20],
                                    [20,19],
                                    [33,34],
                                    [34,33],
                                    [36,37],
                                    [37,36],
                                    [38,39],
                                    [39,38]]) #Source label (original label), target/goal label
        for class_label in range(n_classes):
            
            X_flip_out=np.append(X_flip_out,X[Y==class_label],axis=0)
 
            # Vertical flip (flip each image about its horizontal axis)
            if class_label in label_flip_vertical:
                X_flip_out=np.append(X_flip_out,X[Y==class_label][:,::-1,:],axis=0)
            
            extended_length=len(X_flip_out)-len(y_flip_out)
            y_flip_out=np.append(y_flip_out,np.full((extended_length),class_label))
            
            # Horizontal flip (flip each image about its vertical axis)
            if class_label in label_flip_horizontal:
                X_flip_out=np.append(X_flip_out,X[Y==class_label][:,:,::-1],axis=0)
           
            extended_length=len(X_flip_out)-len(y_flip_out)
            y_flip_out=np.append(y_flip_out,np.full((extended_length),class_label))

            # Horizontally flip images of the paired class and relabel them as this class
            if class_label in label_flip_classes[:,0]:
                target_class=label_flip_classes[label_flip_classes[:,0]==class_label][0,1]
                target_images_flipped=X[Y==target_class][:,:,::-1]
                X_flip_out=np.append(X_flip_out,target_images_flipped,axis=0)

            extended_length=len(X_flip_out)-len(y_flip_out)
            y_flip_out=np.append(y_flip_out,np.full((extended_length),class_label))

        print("Augmenting images by flipping has been completed.")
        return (X_flip_out,y_flip_out)
Preprocessing skipped as the augmented pickle already exists.
In [10]:
if (path.exists(augment_file) and repeat==0):
    print("Preprocessing skipped as the augmented pickle already exists.") 
    pass
else:
    (X_existing_flipped,y_existing_flipped)=flip_secondary(X_train_normalized,y_train)
   
    print("Flipping augments data from {} entries to {} entries.".format(len(X_train_normalized),len(X_existing_flipped)))
    print("Flipped images array shape:", X_existing_flipped.shape)
    print("Flip complete.")
Preprocessing skipped as the augmented pickle already exists.
In [11]:
if (path.exists(augment_file) and repeat==0):
    print("Preprocessing skipped as the augmented pickle already exists.") 
    pass
else:
    indices_augmented=[]
    X_train_augmented=np.copy(X_existing_flipped)
    y_train_augmented=np.copy(y_existing_flipped)

    balance_threshold=900
    for class_index in range(0,n_classes):
        print("Current image placeholder {}".format(class_index))
        image_index=np.where(y_train==class_index)
        class_size=(np.size(image_index))

        n_extra=balance_threshold-class_size # Number of synthetic samples needed (avoid reusing the global `repeat` flag)
        if class_size<=balance_threshold:
            for i in range(0,n_extra):
                if (i%100==0):
                    print("Class Label ", class_index,"--> Class Image Index",i)
                indices_augmented.append(X_train_augmented.shape[0])
                augment_raw_copy=X_train_normalized[image_index[0][i%class_size]]
                a0=rotation(augment_raw_copy,40)
                a1=translation(augment_raw_copy,10)
                a2=shear(augment_raw_copy)
                a3=scale(augment_raw_copy)
                
                X_train_augmented=np.concatenate((X_train_augmented,[a0,a1,a2,a3]),axis=0)
                y_train_augmented=np.concatenate((y_train_augmented,[class_index,class_index,class_index,class_index]),axis=0)

    print("Complete augmenting has been done by linear transform (rotation, translation, shear, scale).")
    
Preprocessing skipped as the augmented pickle already exists.
In [12]:
## PICKLING AUGMENTED DATASET
if (path.exists(augment_file) and repeat==0):
    print("Augmented pickle file already exists, skipping...")
    pass
else:
    augmented_pickle='traffic-signs-data/augmented_train.p'

    try:
        print("Pickling dataset.")
        pickled=open(augmented_pickle,'wb')
        save = {
        'augmented_train_dataset': X_train_augmented,
        'augmented_train_labels': y_train_augmented,
        }
        pickle.dump(save,pickled)
        pickled.close()
        print("Dataset pickled.")
    except:
        print("Error in creating a pickled dataset. Debug.")
    
#     del X_train_augmented
#     del y_train_augmented
#     del X_existing_flipped
#     del y_existing_flipped
Augmented pickle file already exists, skipping...
In [13]:
augment_file='traffic-signs-data/augmented_train.p'

with open(augment_file, mode='rb') as f:
    augment_train = pickle.load(f)

X_train_augmented, y_train_augmented = augment_train['augmented_train_dataset'], augment_train['augmented_train_labels']
print("Using {} dataset.".format(augment_file.split('/')[1]))
print("Shape of augmented features (X_train): ",X_train_augmented.shape)
print("Shape of augmented labels (y_train): ",y_train_augmented.shape)

print("Loaded augmented pickled file.")
Using augmented_train.p dataset.
Shape of augmented features (X_train):  (128192, 32, 32)
Shape of augmented labels (y_train):  (128192,)
Loaded augmented pickled file.
In [14]:
## DATA EXPLORATION & VISUALIZATION
print("Visualizing augmented dataset.")
# Histogram plot to identify count of each class.
hist_ytrain_count=np.bincount(y_train)
hist_xtrain_count=len(hist_ytrain_count)

plt.figure(figsize=(12,5))
plt.hist(y_train_augmented, hist_xtrain_count, normed=False, facecolor='green',align='mid',rwidth=0.8,alpha=0.9,label='Augmented Training set')
plt.hist(y_train,hist_xtrain_count, normed=False, facecolor='black',align='mid',rwidth=0.8, alpha=0.75, label='Original Training set')
plt.legend(loc='upper right')
plt.xlabel('Class Label')
plt.ylabel('Count')
plt.title(r'Distribution of original training and augmented training dataset for each class')
plt.axis([0, 43,0,6000])
plt.grid(True)
plt.tight_layout()
plt.show()
Visualizing augmented dataset.
In [15]:
print("Shuffling and reshaping datasets.")

X_train_shuffle,y_train_shuffle=shuffle(np.asarray(X_train_augmented).reshape(len(X_train_augmented),32,32,1),np.asarray(y_train_augmented).reshape(len(y_train_augmented),))
X_valid_shuffle,y_valid_shuffle=shuffle(np.asarray(X_valid_normalized).reshape(len(X_valid_normalized),32,32,1),np.asarray(y_valid).reshape(len(y_valid),))
X_test_shuffle,y_test_shuffle=shuffle(np.asarray(X_test_normalized).reshape(len(X_test_normalized),32,32,1),np.asarray(y_test).reshape(len(y_test),))

print("Shuffled training datasets shape: ",X_train_shuffle.shape,"& ",y_train_shuffle.shape)
print("Shuffled validation datasets shape: ",X_valid_shuffle.shape,"& ",y_valid_shuffle.shape)
print("Shuffled testing datasets shape: ",X_test_shuffle.shape,"& ",y_test_shuffle.shape)

print("Datasets ready for pruning.")
Shuffling and reshaping datasets.
Shuffled training datasets shape:  (128192, 32, 32, 1) &  (128192,)
Shuffled validation datasets shape:  (4410, 32, 32, 1) &  (4410,)
Shuffled testing datasets shape:  (12630, 32, 32, 1) &  (12630,)
Datasets ready for pruning.
In [16]:
print("Balancing dataset across classes.")

prune_size=2500

class_label,class_counts=np.unique(y_train_shuffle,return_index=False,return_counts=True)

X_train_balanced=np.empty([0,32,32,1])
y_train_balanced=np.empty([0])
for class_index in range(n_classes):

    temp=X_train_shuffle[y_train_shuffle==class_index]
    
    if class_counts[class_index]>=prune_size:
        X_train_balanced=np.append(X_train_balanced,temp[:prune_size],axis=0)
    else:
        accepted_prune_size=class_counts[class_index]
        X_train_balanced=np.append(X_train_balanced,temp[:accepted_prune_size],axis=0)
    extended_length=len(X_train_balanced)-len(y_train_balanced)
    y_train_balanced=np.append(y_train_balanced,np.full((extended_length),class_index))

print("Balanced Datasets ready for the neural network.")
Balancing dataset across classes.
Balanced Datasets ready for the neural network.
In [17]:
# Checking validity of final balanced dataset.
print("Sanity checks on new dataset.\n")

print("X_train_balanced shape :", X_train_balanced.shape)
print("y_train_balanced shape :", y_train_balanced.shape)

## DATA EXPLORATION & VISUALIZATION
print("Visualization of new dataset.")
X_train_temp_balanced=X_train_balanced.reshape(len(X_train_balanced),32,32)

new_class_list,new_class_indices,new_class_counts=np.unique(y_train_balanced, return_index=True, return_counts=True)

# Plot traffic sign images.
for newClass,newIndex,newCounts in zip(new_class_list,new_class_indices,new_class_counts):
    
    print("Class {} {} : {} samples.".format(newClass,arr_classes[int(newClass)],newCounts))
    Main2=plt.figure(figsize=(10,5))
    choice=random.sample(range(newIndex,newCounts+newIndex),10)
    for i in range (0,10):
        row=Main2.add_subplot(1,10,i+1,xticks=[],yticks=[])
        row.imshow(X_train_temp_balanced[choice[i]],cmap='gray')
    plt.show()
# del X_train_temp_balanced
Sanity checks on new dataset.

X_train_balanced shape : (101373, 32, 32, 1)
y_train_balanced shape : (101373,)
Visualization of new dataset.
Class 0.0 Speed limit (20km/h) : 2500 samples.
Class 1.0 Speed limit (30km/h) : 2500 samples.
Class 2.0 Speed limit (50km/h) : 2010 samples.
Class 3.0 Speed limit (60km/h) : 1260 samples.
Class 4.0 Speed limit (70km/h) : 1770 samples.
Class 5.0 Speed limit (80km/h) : 2500 samples.
Class 6.0 End of speed limit (80km/h) : 2500 samples.
Class 7.0 Speed limit (100km/h) : 2500 samples.
Class 8.0 Speed limit (120km/h) : 1260 samples.
Class 9.0 No passing : 2500 samples.
Class 10.0 No passing for vehicles over 3.5 metric tons : 2500 samples.
Class 11.0 Right-of-way at the next intersection : 2340 samples.
Class 12.0 Priority road : 2500 samples.
Class 13.0 Yield : 2500 samples.
Class 14.0 Stop : 1530 samples.
Class 15.0 No vehicles : 2500 samples.
Class 16.0 Vehicles over 3.5 metric tons prohibited : 2500 samples.
Class 17.0 No entry : 2500 samples.
Class 18.0 General caution : 2160 samples.
Class 19.0 Dangerous curve to the left : 2500 samples.
Class 20.0 Dangerous curve to the right : 2500 samples.
Class 21.0 Double curve : 2500 samples.
Class 22.0 Bumpy road : 2500 samples.
Class 23.0 Slippery road : 2500 samples.
Class 24.0 Road narrows on the right : 2500 samples.
Class 25.0 Road work : 2500 samples.
Class 26.0 Traffic signals : 2500 samples.
Class 27.0 Pedestrians : 2500 samples.
Class 28.0 Children crossing : 2500 samples.
Class 29.0 Bicycles crossing : 2500 samples.
Class 30.0 Beware of ice/snow : 2500 samples.
Class 31.0 Wild animals crossing : 2220 samples.
Class 32.0 End of all speed and passing limits : 2500 samples.
Class 33.0 Turn right ahead : 2163 samples.
Class 34.0 Turn left ahead : 2500 samples.
Class 35.0 Ahead only : 2160 samples.
Class 36.0 Go straight or right : 2500 samples.
Class 37.0 Go straight or left : 2500 samples.
Class 38.0 Keep right : 2500 samples.
Class 39.0 Keep left : 2500 samples.
Class 40.0 Roundabout mandatory : 2500 samples.
Class 41.0 End of no passing : 2500 samples.
Class 42.0 End of no passing by vehicles over 3.5 metric tons : 2500 samples.
In [18]:
# Checking validity of final balanced dataset.
print("Sanity checks on new dataset.\n")

print("Balanced dataset sizes :", new_class_counts,"\n")

print("Checking for duplicate images in train-validation-test datasets.")
balanced_train_dataset=X_train_balanced
valid_dataset=X_valid_shuffle
test_dataset=X_test_shuffle

balanced_train_dataset.flags.writeable=False
valid_dataset.flags.writeable=False
test_dataset.flags.writeable=False

train_hash=set([hash(image.tobytes()) for image in balanced_train_dataset])
valid_hash=set([hash(image.tobytes()) for image in valid_dataset])
test_hash=set([hash(image.tobytes()) for image in test_dataset])

train_duplicates=len(balanced_train_dataset)-len(train_hash)
overlap_train_valid=len(set.intersection(train_hash,valid_hash))
overlap_train_test=len(set.intersection(train_hash,test_hash))
overlap_valid_test=len(set.intersection(valid_hash,test_hash))

print("Training set overlap of {} duplicate images".format(train_duplicates))
print("Train-Valid Overlap of {} images".format(overlap_train_valid))
print("Train-Test Overlap of {} images".format(overlap_train_test))
print("Valid-Test Overlap of {} images".format(overlap_valid_test))

# del balanced_train_dataset
# del valid_dataset
# del test_dataset
Sanity checks on new dataset.

Balanced dataset sizes : [2500 2500 2010 1260 1770 2500 2500 2500 1260 2500 2500 2340 2500 2500 1530
 2500 2500 2500 2160 2500 2500 2500 2500 2500 2500 2500 2500 2500 2500 2500
 2500 2220 2500 2163 2500 2160 2500 2500 2500 2500 2500 2500 2500] 

Checking for duplicate images in train-validation-test datasets.
Training set overlap of 22564 duplicate images
Train-Valid Overlap of 0 images
Train-Test Overlap of 0 images
Valid-Test Overlap of 0 images

Model Architecture: Setup

In [19]:
EPOCHS=75
BATCH_SIZE=100

# BASIC HYPERPARAMETERS
mu=0.0
sigma=0.1
In [20]:
# Defining commonly used tensorflow functions

def convolution(layer,kernel,bias):
    # W- Weight [Filter height, Filter width, color_channels, k_output]
    temp=tf.nn.conv2d(layer,kernel,strides=[1,1,1,1],padding='SAME')
    return tf.nn.bias_add(temp,bias)

def full_connected(layer,weight,bias):
    temp=tf.matmul(layer,weight)
    return tf.nn.bias_add(temp,bias)
    
def maxpool(layer):
    return tf.nn.max_pool(layer,ksize=[1,2,2,1],strides=[1,2,2,1],padding='SAME')

def maxpool_3x3(layer):
    return tf.nn.max_pool(layer,ksize=[1,3,3,1],strides=[1,1,1,1],padding='SAME')
    
def relu(layer,name=None): # `name` is optional so the helper can also be called with a single argument
    return tf.nn.relu(layer,name=name)

def dropout(layer,keep_prob):
    return tf.nn.dropout(layer,keep_prob)

def evaluate(X_dataset,Y_dataset):
    total_accuracy=0
    data_size=len(X_dataset)
    
    sess=tf.get_default_session()
    for offset in range(0,data_size,BATCH_SIZE):
        batch_x,batch_y=X_dataset[offset:offset+BATCH_SIZE],Y_dataset[offset:offset+BATCH_SIZE]
        local_accuracy=sess.run(accuracy_operation,feed_dict={X:batch_x, Y:batch_y, keep_prob:1.0})
        total_accuracy+=(local_accuracy*len(batch_x))
    return total_accuracy/data_size
In [21]:
X=tf.placeholder(tf.float32,(None,32,32,1))
Y=tf.placeholder(tf.int32,(None))
keep_prob=tf.placeholder(tf.float32)
one_hot_y=tf.one_hot(Y,43)

Architecture One: LeNet-5

def LeNet_init(x):
    weights={
        'w_conv1': tf.Variable(tf.truncated_normal(shape=[4,4,1,6], mean=mu, stddev=sigma)),
        'w_conv2': tf.Variable(tf.truncated_normal(shape=[5,5,6,16], mean=mu, stddev=sigma)),
        'w_dense1': tf.Variable(tf.truncated_normal(shape=[400,120], mean=mu, stddev=sigma)),
        'w_dense2': tf.Variable(tf.truncated_normal(shape=[120,84], mean=mu, stddev=sigma)),
        'w_output': tf.Variable(tf.truncated_normal(shape=[84,43], mean=mu, stddev=sigma))
    }
    biases={
        'b_conv1': tf.Variable(tf.truncated_normal([6])),
        'b_conv2': tf.Variable(tf.truncated_normal([16])),
        'b_dense1': tf.Variable(tf.truncated_normal([120])),
        'b_dense2': tf.Variable(tf.truncated_normal([84])),
        'b_output': tf.Variable(tf.truncated_normal([43]))
    }
    mpool_filters={
        'filt_conv1': [1,2,2,1],
        'filt_conv2': [1,2,2,1]
    }
    Padding='VALID'
    Strides_conv=[1,1,1,1]
    Strides_pool=[1,2,2,1]

    # Layer 1: Convolutional. Input = 32x32x1. Output = 29x29x6 (4x4 kernel, VALID padding).
    conv1=tf.nn.conv2d(x,weights['w_conv1'],strides=Strides_conv,padding=Padding)
    conv1=tf.nn.bias_add(conv1,biases['b_conv1'])
    conv1=tf.nn.relu(conv1) # Activation.
    conv1=tf.nn.max_pool(conv1,ksize=mpool_filters['filt_conv1'],strides=Strides_pool,padding=Padding) # Pooling. Input = 29x29x6. Output = 14x14x6.

    # Layer 2: Convolutional. Output = 10x10x16.
    conv2=tf.nn.conv2d(conv1,weights['w_conv2'],strides=Strides_conv,padding=Padding)
    conv2=tf.nn.bias_add(conv2,biases['b_conv2'])
    conv2=tf.nn.relu(conv2) # Activation.
    conv2=tf.nn.max_pool(conv2,ksize=mpool_filters['filt_conv2'],strides=Strides_pool,padding=Padding) # Pooling. Input = 10x10x16. Output = 5x5x16.

    # Flatten. Input = 5x5x16. Output = 400.
    lenet_flat=flatten(conv2)

    # Layer 3: Fully Connected. Input = 400. Output = 120.
    dense_1=tf.add(tf.matmul(lenet_flat,weights['w_dense1']),biases['b_dense1'])
    dense_1=tf.nn.relu(dense_1) # Activation.
    # dense_1=tf.nn.dropout(dense_1,keep_prob)

    # Layer 4: Fully Connected. Input = 120. Output = 84.
    dense_2=tf.add(tf.matmul(dense_1,weights['w_dense2']),biases['b_dense2'])
    dense_2=tf.nn.relu(dense_2) # Activation.
    # dense_2=tf.nn.dropout(dense_2,keep_prob)

    # Layer 5: Fully Connected. Input = 84. Output = 43.
    logits=tf.add(tf.matmul(dense_2,weights['w_output']),biases['b_output'])
    return logits

Train, Validate and Test the Model: LeNet-5

A validation set can be used to assess how well the model is performing. Low accuracy on both the training and validation sets implies underfitting; high accuracy on the training set but low accuracy on the validation set implies overfitting.
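
As a rough illustration of that rule of thumb (the 0.93 target comes from the project specification; the 0.05 gap is an assumed, illustrative threshold, not part of the rubric):

# Sketch: label the fit quality from training/validation accuracy.
def fit_diagnostic(train_acc,valid_acc,target=0.93,gap=0.05):
    if train_acc<target and valid_acc<target:
        return "underfitting"
    if train_acc-valid_acc>gap:
        return "overfitting"
    return "reasonable fit"

print(fit_diagnostic(0.99,0.90)) # -> overfitting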

learning_rate=0.002
logits_LeNet5=LeNet_init(X)
cross_entropy=tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y,logits=logits_LeNet5)
loss_operation=tf.reduce_mean(cross_entropy)
optimizer=tf.train.AdamOptimizer() # Learning Rate=0.02 with Adam Optimizer
training_operation=optimizer.minimize(loss_operation)

correct_prediction=tf.equal(tf.argmax(logits_LeNet5,1),tf.argmax(one_hot_y,1))
accuracy_operation=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
saver=tf.train.Saver()

init=tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    data_size=len(X_train_balanced)

    print("LeNet5 Training in progress...")
    print()
    Lenet_start=time.clock()
    for i in range(EPOCHS):
        X_train_final,y_train_final=shuffle(X_train_balanced,y_train_balanced)
        for offset in range(0,data_size,BATCH_SIZE):
            end=offset+BATCH_SIZE
            batch_x,batch_y=X_train_final[offset:end],y_train_final[offset:end]
            _,l=sess.run([training_operation,loss_operation], feed_dict={X:batch_x, Y:batch_y})

        training_accuracy=evaluate(X_train_balanced,y_train_balanced) # Metric to also check for overfitting
        validation_accuracy=evaluate(X_valid_shuffle,y_valid_shuffle)
        print("EPOCH {}...".format(i+1))
        print("Learning Rate of {:.8f}".format(sess.run(optimizer._lr_t)))
        print("Training batch loss at Epoch {}: {:.5f}".format(i+1, l))
        print("Training Accuracy (overfitting-check) of {:.5f}".format(training_accuracy))
        print("Validation Accuracy of {:.5f}".format(validation_accuracy))
        print()
    Lenet_end=time.clock()
    saver.save(sess,'./tf-sessions-data/lenet5_init')
    print("Lenet Train-Test time {} s.".format(round((Lenet_end-Lenet_start),2)))
    print("LeNet-5 model trained and saved.")

Architecture Two: VGG-16

EPOCHS=60
BATCH_SIZE=100 # Batch size increased from 64 to 100.

# BASIC HYPERPARAMETERS
mu=0.0
sigma=0.01

X=tf.placeholder(tf.float32,(None,32,32,1))
Y=tf.placeholder(tf.int32,(None))
keep_prob=tf.placeholder(tf.float32)
one_hot_y=tf.one_hot(Y,43)

def VGG_16(x):
    with tf.variable_scope("param"):
        weights={
            'w_conv1_1': tf.Variable(tf.truncated_normal(shape=[3,3,1,16], mean=mu, stddev=sigma),name='w_conv1_1'),
            'w_conv1_2': tf.Variable(tf.truncated_normal(shape=[3,3,16,16], mean=mu, stddev=sigma),name='w_conv1_2'),
            'w_conv2_1': tf.Variable(tf.truncated_normal(shape=[3,3,16,32], mean=mu, stddev=sigma),name='w_conv2_1'),
            'w_conv2_2': tf.Variable(tf.truncated_normal(shape=[3,3,32,32], mean=mu, stddev=sigma),name='w_conv2_2'),
            'w_conv3_1': tf.Variable(tf.truncated_normal(shape=[3,3,32,64], mean=mu, stddev=sigma),name='w_conv3_1'),
            'w_conv3_2': tf.Variable(tf.truncated_normal(shape=[3,3,64,64], mean=mu, stddev=sigma),name='w_conv3_2'),
            'w_conv3_3': tf.Variable(tf.truncated_normal(shape=[3,3,64,64], mean=mu, stddev=sigma),name='w_conv3_3'),
            'w_conv4_1': tf.Variable(tf.truncated_normal(shape=[3,3,64,128], mean=mu, stddev=sigma),name='w_conv4_1'),
            'w_conv4_2': tf.Variable(tf.truncated_normal(shape=[3,3,128,128], mean=mu, stddev=sigma),name='w_conv4_2'),
            'w_conv4_3': tf.Variable(tf.truncated_normal(shape=[3,3,128,512], mean=mu, stddev=sigma),name='w_conv4_3'),
            'w_conv5_1': tf.Variable(tf.truncated_normal(shape=[3,3,512,512], mean=mu, stddev=sigma),name='w_conv5_1'),
            'w_conv5_2': tf.Variable(tf.truncated_normal(shape=[3,3,512,512], mean=mu, stddev=sigma),name='w_conv5_2'),
            'w_conv5_3': tf.Variable(tf.truncated_normal(shape=[3,3,512,512], mean=mu, stddev=sigma),name='w_conv5_3'),
            'w_dense_1': tf.Variable(tf.truncated_normal(shape=[512,256], mean=mu, stddev=sigma),name='w_dense_1'),
            'w_dense_2': tf.Variable(tf.truncated_normal(shape=[256,256], mean=mu, stddev=sigma),name='w_dense_2'),
            'w_output': tf.Variable(tf.truncated_normal(shape=[256,43], mean=mu, stddev=sigma),name='w_output')
        }
        biases={
            'b_conv1_1': tf.Variable(tf.truncated_normal(shape=[16]),name='b_conv1_1'),
            'b_conv1_2': tf.Variable(tf.truncated_normal(shape=[16]),name='b_conv1_2'),
            'b_conv2_1': tf.Variable(tf.truncated_normal(shape=[32]),name='b_conv2_1'),
            'b_conv2_2': tf.Variable(tf.truncated_normal(shape=[32]),name='b_conv2_2'),
            'b_conv3_1': tf.Variable(tf.truncated_normal(shape=[64]),name='b_conv3_1'),
            'b_conv3_2': tf.Variable(tf.truncated_normal(shape=[64]),name='b_conv3_2'),
            'b_conv3_3': tf.Variable(tf.truncated_normal(shape=[64]),name='b_conv3_3'),
            'b_conv4_1': tf.Variable(tf.truncated_normal(shape=[128]),name='b_conv4_1'),
            'b_conv4_2': tf.Variable(tf.truncated_normal(shape=[128]),name='b_conv4_2'),
            'b_conv4_3': tf.Variable(tf.truncated_normal(shape=[512]),name='b_conv4_3'),
            'b_conv5_1': tf.Variable(tf.truncated_normal(shape=[512]),name='b_conv5_1'),
            'b_conv5_2': tf.Variable(tf.truncated_normal(shape=[512]),name='b_conv5_2'),
            'b_conv5_3': tf.Variable(tf.truncated_normal(shape=[512]),name='b_conv5_3'),
            'b_dense_1': tf.Variable(tf.truncated_normal(shape=[256]),name='b_dense_1'),
            'b_dense_2': tf.Variable(tf.truncated_normal(shape=[256]),name='b_dense_2'),
            'b_output': tf.Variable(tf.truncated_normal(shape=[43]),name='b_output')
        }

        # Block One: Two convolution layers
        conv1_1=relu(convolution(x,weights['w_conv1_1'],biases['b_conv1_1']))
        conv1_2=relu(convolution(conv1_1,weights['w_conv1_2'],biases['b_conv1_2']))

        # Block Two: One max pooling layer followed by two convolution layers
        pool2=maxpool(conv1_2)
        conv2_1=relu(convolution(pool2,weights['w_conv2_1'],biases['b_conv2_1']))
        conv2_2=relu(convolution(conv2_1,weights['w_conv2_2'],biases['b_conv2_2']))

        # Block Three: One max pooling layer followed by three convolution layers
        pool3=maxpool(conv2_2)
        conv3_1=relu(convolution(pool3,weights['w_conv3_1'],biases['b_conv3_1']))
        conv3_2=relu(convolution(conv3_1,weights['w_conv3_2'],biases['b_conv3_2']))
        conv3_3=relu(convolution(conv3_2,weights['w_conv3_3'],biases['b_conv3_3']))

        # Block Four: One max pooling layer followed by three convolution layers
        pool4=maxpool(conv3_3)
        conv4_1=relu(convolution(pool4,weights['w_conv4_1'],biases['b_conv4_1']))
        conv4_2=relu(convolution(conv4_1,weights['w_conv4_2'],biases['b_conv4_2']))
        conv4_3=relu(convolution(conv4_2,weights['w_conv4_3'],biases['b_conv4_3']))

        # Block Five: One max pooling layer followed by three convolution layers
        pool5=maxpool(conv4_3)
        conv5_1=relu(convolution(pool5,weights['w_conv5_1'],biases['b_conv5_1']))
        conv5_2=relu(convolution(conv5_1,weights['w_conv5_2'],biases['b_conv5_2']))
        conv5_3=relu(convolution(conv5_2,weights['w_conv5_3'],biases['b_conv5_3']))

        # Block Six: One max pooling layer followed by three fully-connected layers
        pool6=maxpool(conv5_3)
        vgg_flat=flatten(pool6)
        dense6_1=relu(full_connected(vgg_flat,weights['w_dense_1'],biases['b_dense_1']))
        dense6_2=relu(full_connected(dense6_1,weights['w_dense_2'],biases['b_dense_2']))
        logits=full_connected(dense6_2,weights['w_output'],biases['b_output'])
        return logits

Train, Validate and Test the Model: VGG-16

logits_VGG16=VGG_16(X)
cross_entropy=tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y,logits=logits_VGG16)
loss_operation=tf.reduce_mean(cross_entropy)

# L-2 regularization applied to Momentum Gradient Descent optimizer with intrinsic learning rate decay
# L-2 Regularization
penalty_term=0.5
vars=tf.trainable_variables()
l2_loss_term=tf.add_n([(penalty_term*tf.nn.l2_loss(var)) for var in vars if 'param_1/w_' in var.name])
l2_loss=(loss_operation+l2_loss_term)

# Learning Rate Decay
global_step=tf.Variable(0)
initial_learning_rate=0.001 #.0003,.0005
num_epochs_per_decay=15
learning_rate_decay_factor=0.96
num_batches_per_epoch=int(X_train_balanced.shape[0]/float(BATCH_SIZE))
decay_steps=int(num_batches_per_epoch*num_epochs_per_decay)
decayed_learning_rate=tf.train.exponential_decay(initial_learning_rate,global_step,decay_steps,
                                                 learning_rate_decay_factor,staircase=False)

# Optimizer
# momentum=0.95
# optimizer=tf.train.AdamOptimizer(learning_rate=decayed_learning_rate)
# minimizer=optimizer.minimize(l2_loss,global_step=global_step)
optimizer=tf.train.AdamOptimizer()
minimizer=optimizer.minimize(l2_loss)

correct_prediction=tf.equal(tf.argmax(logits_VGG16,1),tf.argmax(one_hot_y,1))
accuracy_operation=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
saver=tf.train.Saver()

init=tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    data_size=len(X_train_balanced)

    print("VGG-16 Training in progress...")
    print()
    VGG_start=time.clock()
    for i in range(EPOCHS):
        X_train_final,y_train_final=shuffle(X_train_balanced,y_train_balanced)
        for offset in range(0,data_size,BATCH_SIZE):
            end=offset+BATCH_SIZE
            batch_x,batch_y=X_train_final[offset:end],y_train_final[offset:end]
            _,l=sess.run([minimizer,l2_loss], feed_dict={X:batch_x, Y:batch_y, keep_prob:0.5})

        training_accuracy=evaluate(X_train_balanced,y_train_balanced) # Metric to also check for overfitting
        validation_accuracy=evaluate(X_valid_shuffle,y_valid_shuffle)
        print("EPOCH {}...".format(i+1))
        print("Learning Rate of {:.8f}".format(sess.run(optimizer._lr_t)))
        print("Training batch loss at Epoch {}: {:.5f}".format(i+1, l))
        print("Training Accuracy (overfitting-check) of {:.5f}".format(training_accuracy))
        print("Validation Accuracy of {:.5f}".format(validation_accuracy))
        print()
    VGG_end=time.clock()
    saver.save(sess,'./tf-sessions-data/vgg16_02')
    print("VGG-16 Train-Test time {} s.".format(round((VGG_end-VGG_start),2)))
    print("VGG-16 model trained and saved.")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver_test=tf.train.import_meta_graph('./tf-sessions-data/vgg16_02.meta')
    saver_test.restore(sess,'./tf-sessions-data/vgg16_02')
    # saver.restore(sess,'./tf-sessions-data/vgg16_02')
    # y_test_predict=sess.run(tf.argmax(logits,1), feed_dict={X: X_test_shuffle, keep_prob: 1.0})
    test_accuracy=evaluate(X_test_shuffle,y_test_shuffle)
    print("Test Accuracy on VGG-16 network:", round(test_accuracy,4))

Architecture Three: Repurposed Barebones Model

EPOCHS=10
BATCH_SIZE=64

# BASIC HYPERPARAMETERS
mu=0.0
sigma=0.1

X=tf.placeholder(tf.float32,(None,32,32,1))
Y=tf.placeholder(tf.int32,(None))
keep_prob=tf.placeholder(tf.float32)
one_hot_y=tf.one_hot(Y,43)
sess=tf.InteractiveSession()

def inception_module_naive(layer,color_channels,kernel_output):
    mu=0
    sigma=0.1
    # 1*1 Convolution Layer
    filter_1_1=tf.Variable(tf.truncated_normal([1,1,color_channels,kernel_output], mu, sigma))
    block_conv1=tf.nn.conv2d(layer,filter_1_1,strides=[1,1,1,1], padding='SAME')
    # 3*3 Convolution Layer
    filter_3_1=tf.Variable(tf.truncated_normal([3,3,color_channels,kernel_output], mu, sigma))
    block_conv3=tf.nn.conv2d(layer,filter_3_1,strides=[1,1,1,1], padding='SAME')
    # 5*5 Convolution Layer
    filter_5_1=tf.Variable(tf.truncated_normal([5,5,color_channels,kernel_output], mu, sigma))
    block_conv5=tf.nn.conv2d(layer,filter_5_1,strides=[1,1,1,1], padding='SAME')
    # Average Pooling followed by 1*1 Convolution Layer
    average_pool=tf.nn.avg_pool(layer,ksize=[1,2,2,1],strides=[1,2,2,1], padding='SAME')
    filter_1_2=tf.Variable(tf.truncated_normal([1,1,color_channels,kernel_output],mu,sigma))
    block_avgpool_conv1=tf.nn.conv2d(average_pool,filter_1_2,strides=[1,1,1,1], padding='SAME')
    # Concatenating layers
    bias=tf.Variable(tf.truncated_normal([color_channels+3*kernel_output], mu, sigma))
    x=tf.concat([block_conv1,block_conv3,block_conv5,block_avgpool_conv1],axis=3)
    x=tf.nn.bias_add(x,bias)
    return tf.nn.relu(x)

def inception_module_sequence1(layer,input_fmap,output_fmap):
    mu=0
    sigma=0.1
    def convolution(layer,weight,bias):
        # W- Weight [Filter height, Filter width, color_channels, k_output]
        temp=tf.nn.conv2d(layer,weight,strides=[1,1,1,1],padding='SAME')
        return tf.nn.bias_add(temp,bias)
    def maxpool_3x3(layer):
        return tf.nn.max_pool(layer,ksize=[1,3,3,1],strides=[1,1,1,1],padding='SAME')
    def relu(layer):
        return tf.nn.relu(layer)

    # Weights,Biases
    # Three 1*1 Convolution Layers- Block One
    reduced_layer=tf.cast(0.5*output_fmap,dtype=tf.int32)
    b1_conv_weight_1x1_1=tf.Variable(tf.truncated_normal([1,1,input_fmap,output_fmap],mu,sigma),name='b1_conv_weight_1x1_1')
    b1_conv_bias_1x1_1=tf.Variable(tf.zeros([output_fmap]),name='b1_conv_bias_1x1_1')
    b1_conv_weight_1x1_2=tf.Variable(tf.truncated_normal([1,1,input_fmap,reduced_layer],mu,sigma),name='b1_conv_weight_1x1_2')
    b1_conv_bias_1x1_2=tf.Variable(tf.zeros([reduced_layer]),name='b1_conv_bias_1x1_2')
    b1_conv_weight_1x1_3=tf.Variable(tf.truncated_normal([1,1,input_fmap,reduced_layer],mu,sigma),name='b1_conv_weight_1x1_3')
    b1_conv_bias_1x1_3=tf.Variable(tf.zeros([reduced_layer]),name='b1_conv_bias_1x1_3')
    # 3*3 Convolution Layer that follows b_1_conv_2- Block Two
    b2_conv_weight_3x3=tf.Variable(tf.truncated_normal([3,3,reduced_layer,output_fmap],mu,sigma),name='b2_conv_weight_3x3')
    b2_conv_bias_3x3=tf.Variable(tf.zeros([output_fmap]),name='b2_conv_bias_3x3')
    # 5*5 Convolution Layer that follows b1_conv_3- Block Two
    b2_conv_weight_5x5=tf.Variable(tf.truncated_normal([5,5,reduced_layer,output_fmap],mu,sigma),name='b2_conv_weight_5x5')
    b2_conv_bias_5x5=tf.Variable(tf.zeros([output_fmap]),name='b2_conv_bias_5x5')
    # 1*1 Convolution Layer that follows a 3*3 maxpool layer- Block Two
    b2_conv_weight_1x1=tf.Variable(tf.truncated_normal([1,1,reduced_layer,output_fmap],mu,sigma),name='b2_conv_weight_1x1')
    b2_conv_bias_1x1=tf.Variable(tf.zeros([output_fmap]),name='b2_conv_bias_1x1')

    # Fitting blocks
    # Parallel sub-modules in Block One
    # with tf.name_scope("Inception_SubBlock_One") as scope:
    b1_conv1_1x1=convolution(layer,b1_conv_weight_1x1_1,b1_conv_bias_1x1_1)
    b1_conv2_1x1=relu(convolution(layer,b1_conv_weight_1x1_2,b1_conv_bias_1x1_2))
    b1_conv3_1x1=relu(convolution(layer,b1_conv_weight_1x1_3,b1_conv_bias_1x1_3))
    b1_maxpool_3x3=maxpool_3x3(layer)

    # Parallel sub-modules in Block Two
    # with tf.name_scope("Inception_SubBlock_Two") as scope:
    # 3*3 convolution connected to the second 1*1 convolution in Block One
    b2_conv1_3x3=convolution(b1_conv2_1x1,b2_conv_weight_3x3,b2_conv_bias_3x3)
    # 5*5 convolution connected to the third 1*1 convolution in Block One
    b2_conv1_5x5=convolution(b1_conv3_1x1,b2_conv_weight_5x5,b2_conv_bias_5x5)
    # 1*1 convolution connected to the max pool layer in Block One
    b2_conv1_1x1=convolution(b1_maxpool_3x3,b2_conv_weight_1x1,b2_conv_bias_1x1)

    # Concatenating module b1_conv1_1x1 in Block One and all the modules in Block Two
    # with tf.name_scope("Inception_concatenate") as scope:
    b3_concat=tf.concat([b1_conv1_1x1,b2_conv1_3x3,b2_conv1_5x5,b2_conv1_1x1],3)
    inception=relu(b3_concat)
    return inception

def max_out(inputs, num_units, axis=None):
    shape = inputs.get_shape().as_list()
    if shape[0] is None:
        shape[0] = -1
    if axis is None:
        # Assume that channel is the last dimension
        axis = -1
    num_channels = shape[axis]
    if num_channels % num_units:
        raise ValueError('number of features({}) is not '
                         'a multiple of num_units({})'.format(num_channels, num_units))
    shape[axis] = num_units
    shape += [num_channels // num_units]
    outputs = tf.reduce_max(tf.reshape(inputs, shape), -1, keep_dims=False)
    return outputs

def Barebones_inception(x):
    weights={
        'w_conv_2x2_1': tf.Variable(tf.truncated_normal(shape=[2,2,1,6], mean=mu, stddev=sigma)),
        'w_conv_2x2_2': tf.Variable(tf.truncated_normal(shape=[5,5,6,16], mean=mu, stddev=sigma)),
        'w_conv_1x1_1': tf.Variable(tf.truncated_normal(shape=[1,1,6,32], mean=mu, stddev=sigma)),
        'w_dense1': tf.Variable(tf.truncated_normal(shape=[65536,2048], mean=mu, stddev=sigma)),
        'w_dense2': tf.Variable(tf.truncated_normal(shape=[2048,128], mean=mu, stddev=sigma)),
        'w_output': tf.Variable(tf.truncated_normal(shape=[128,43], mean=mu, stddev=sigma))
    }
    biases={
        'b_conv_2x2_1': tf.Variable(tf.truncated_normal([6])),
        'b_conv_2x2_2': tf.Variable(tf.truncated_normal([16])),
        'b_conv_1x1_1': tf.Variable(tf.truncated_normal([32])),
        'b_dense1': tf.Variable(tf.truncated_normal([2048])),
        'b_dense2': tf.Variable(tf.truncated_normal([128])),
        'b_output': tf.Variable(tf.truncated_normal([43]))
    }

    # Stem
    # with tf.name_scope("Conv_Stem_1") as scope:
    conv1_1=convolution(x,weights['w_conv_2x2_1'],biases['b_conv_2x2_1'])
    conv1_1_activ=max_out(conv1_1,6)
    # with tf.name_scope("Pool_Stem_1") as scope:
    pool1_1=maxpool_3x3(conv1_1_activ)
    # with tf.name_scope("Conv_Stem_2") as scope:
    conv1_2=convolution(pool1_1,weights['w_conv_2x2_2'],biases['b_conv_2x2_2'])
    conv1_2_activ=max_out(conv1_2,8)
    # with tf.name_scope("Pool_Stem_2") as scope:
    pool1_2=maxpool_3x3(conv1_1_activ)
    # with tf.name_scope("Conv_Stem_3") as scope:
    conv1_3=convolution(pool1_2,weights['w_conv_1x1_1'],biases['b_conv_1x1_1'])
    conv1_3_activ=max_out(conv1_3,8)
    # with tf.name_scope("Pool_Stem_3") as scope:
    pool1_3=maxpool_3x3(conv1_3_activ)

    # Body- Two inception modules that already have RELU activation units
    # with tf.name_scope("Inception_1") as scope:
    inception_s1=inception_module_sequence1(pool1_3,8,16)
    # with tf.name_scope("Inception_2") as scope:
    # inception_s2=inception_module_sequence1(inception_s1,64,128)
    # Ignoring a second inception module because of computation time even on an AWS instance.

    # Root
    # with tf.name_scope("Flatten") as scope:
    barebones_flat=flatten(inception_s1)
    # with tf.name_scope("FullConnected_1") as scope:
    dense3_1=full_connected(barebones_flat,weights['w_dense1'],biases['b_dense1'])
    dense3_1_activ=max_out(dense3_1,2048)
    # with tf.name_scope("Dropout_1") as scope:
    dense3_1_dropout=dropout(dense3_1_activ,keep_prob)
    # with tf.name_scope("FullConnected_2") as scope:
    dense3_2=full_connected(dense3_1_dropout,weights['w_dense2'],biases['b_dense2'])
    dense3_2_activ=max_out(dense3_2,128)
    # with tf.name_scope("Dropout_2") as scope:
    dense3_2_dropout=dropout(dense3_2_activ,keep_prob)
    # with tf.name_scope("Logits") as scope:
    logits=full_connected(dense3_2_dropout,weights['w_output'],biases['b_output'])
    return logits

Train, Validate and Test the Model: Barebones inception architecture

learning_rate=0.0005

# with tf.name_scope("EntropyCost") as scope:
logits_bareinception=Barebones_inception(X)
cross_entropy=tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y,logits=logits_bareinception)
loss=tf.reduce_mean(cross_entropy)

# L-2 Regularization
penalty_term=0.25
vars=tf.trainable_variables()
l2_loss_term=tf.add_n([tf.nn.l2_loss(var) for var in vars if '_conv_weight_' in var.name])
loss_operation=(loss+penalty_term*l2_loss_term)

optimizer=tf.train.AdamOptimizer(learning_rate=learning_rate)
training_operation=optimizer.minimize(loss_operation)

# with tf.name_scope("Evaluate") as scope:
correct_prediction=tf.equal(tf.argmax(logits_bareinception,1),tf.argmax(one_hot_y,1))
accuracy_operation=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
# accuracy_summary=tf.summary.scalar("accuracy",accuracy_operation)
saver=tf.train.Saver()

# Creating a graph
# merged=tf.merge_all_summaries()
# writer=tf.train.SummaryWriter('./graph-logs',sess.graph_def)
merged=tf.summary.merge_all()
writer=tf.summary.FileWriter('./graph-logs',sess.graph_def)

init=tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    data_size=len(X_train_balanced)

    print("Simplified Inception model Training in progress...")
    print()
    Inception_start=time.clock()

    # Logging data
    Epochs=[]
    Training_losses=[]
    Training_accuracies=[]
    Validation_accuracies=[]

    for i in range(EPOCHS):
        Epoch_time=time.clock()
        X_train_final,y_train_final=shuffle(X_train_balanced,y_train_balanced)
        for offset in range(0,data_size,BATCH_SIZE):
            end=offset+BATCH_SIZE
            batch_x,batch_y=X_train_final[offset:end],y_train_final[offset:end]
            # _,l,summary_str=sess.run([training_operation, loss_operation, merged], feed_dict={X:batch_x, Y:batch_y, keep_prob:0.50})
            _,l=sess.run([training_operation, loss_operation], feed_dict={X:batch_x, Y:batch_y, keep_prob:0.50})

        training_accuracy=evaluate(X_train_balanced,y_train_balanced)
        validation_accuracy=evaluate(X_valid_shuffle,y_valid_shuffle)
        # writer.add_summary(summary_str,i)
        # writer.flush()
        print("EPOCH {} with forward-backward propagation time of {}s".format(i+1,round(time.clock()-Epoch_time,3)))
        print("Learning Rate of {:.8f}".format(sess.run(optimizer._lr_t)))
        print("Training batch loss at Epoch {}: {:.5f}".format(i+1, l))
        print("Training Accuracy of {:.5f}".format(training_accuracy))
        print("Validation Accuracy of {:.5f}".format(validation_accuracy))
        print()

        # Logging data
        Epochs.append(i)
        Training_losses.append(l)
        Training_accuracies.append(training_accuracy)
        Validation_accuracies.append(validation_accuracy)

    Inception_end=time.clock()
    saver.save(sess,'./tf-sessions-data/bbinception_init')
    print("Barebones-Inception architecture Train-Test time {}s.".format(round((Inception_end-Inception_start),2)))
    print("Barebones-Inception model trained and saved.")

Architecture Four: Simplified CNN Model

In [22]:
EPOCHS=75
BATCH_SIZE=64

# BASIC HYPERPARAMETERS
mu=0.0
sigma=0.1
# MISC
save_path='./tf-sessions-data/simplecnn_m2_e75_lr100'
In [23]:
sess=tf.InteractiveSession()
In [24]:
def Simple_CNN(x):
#     with tf.variable_scope("param"):
    weights={
        'W_conv1': tf.Variable(tf.truncated_normal(shape=(5,5,1,16), mean=mu, stddev=sigma), name='W_conv1'),
        'W_conv2': tf.Variable(tf.truncated_normal(shape=(3,3,16,32), mean=mu, stddev=sigma), name='W_conv2'),
        'W_conv3': tf.Variable(tf.truncated_normal(shape=(3,3,32,64), mean=mu, stddev=sigma), name='W_conv3'),
        
        'W_dense1': tf.Variable(tf.truncated_normal(shape=(1024,1024), mean=mu, stddev=sigma), name='W_dense1'),
        'W_dense2': tf.Variable(tf.truncated_normal(shape=(1024,512), mean=mu, stddev=sigma), name='W_dense2'),
        'W_dense3': tf.Variable(tf.truncated_normal(shape=(512,256), mean=mu, stddev=sigma), name='W_dense3'),
        'W_output': tf.Variable(tf.truncated_normal(shape=(256,43), mean=mu, stddev=sigma), name='W_output') 
    }
    
    biases={
        'b_conv1': tf.Variable(tf.truncated_normal([16])),
        'b_conv2': tf.Variable(tf.truncated_normal([32])),
        'b_conv3': tf.Variable(tf.truncated_normal([64])),
       
        'b_dense1': tf.Variable(tf.truncated_normal([1024])),
        'b_dense2': tf.Variable(tf.truncated_normal([512])),
        'b_dense3': tf.Variable(tf.truncated_normal([256])),
        'b_output': tf.Variable(tf.truncated_normal([43]))
    }
    
    # Layer One: Convolution
    with tf.name_scope("Convolution_Layer_1") as scope:
        conv1=convolution(x,weights['W_conv1'],biases['b_conv1'])
        # Layer One: Activation
        conv1=relu(conv1,'activation_1')
    # Layer One: Max-Pooling
    with tf.name_scope("MaxPool_1") as scope:
        pool1=maxpool(conv1)
        
    # Layer Two: Convolution
    with tf.name_scope("Convolution_Layer_2") as scope:
        conv2=convolution(pool1,weights['W_conv2'],biases['b_conv2'])
        # Layer Two: Activation
        conv2=relu(conv2,'activation_2')
    # Layer Two: Max-Pooling
    with tf.name_scope("MaxPool_2") as scope:
        pool2=maxpool(conv2)
        
    # Layer Three: Convolution
    with tf.name_scope("Convolution_Layer_3") as scope:
        conv3=convolution(pool2,weights['W_conv3'],biases['b_conv3'])
        # Layer Three: Activation
        conv3=relu(conv3,'activation_3')
    # Layer Three: Max-Pooling
    with tf.name_scope("MaxPool_3") as scope:
        pool3=maxpool(conv3)
        
    # Flatten Layer
    with tf.name_scope("Flatten_Layer") as scope:
        flat=flatten(pool3)

    # Layer Four: Fully Connected
    with tf.name_scope("Dense_Layer_1") as scope:
        dense1=full_connected(flat,weights['W_dense1'],biases['b_dense1'])
        # Layer Four: Activation
        dense1=relu(dense1,'activation_4')
    # Layer Four: Dropout
    with tf.name_scope("Dropout_1") as scope:
        dense1_dropout=dropout(dense1,keep_prob)

    # Layer Five: Fully Connected
    with tf.name_scope("Dense_Layer_2") as scope:
        dense2=full_connected(dense1_dropout,weights['W_dense2'],biases['b_dense2'])
        # Layer Five: Activation
        dense2=relu(dense2,'activation_5')
    # Layer Five: Dropout
    with tf.name_scope("Dropout_2") as scope:
        dense2_dropout=dropout(dense2,keep_prob)

    # Layer Six: Fully Connected
    with tf.name_scope("Dense_Layer_3") as scope:
        dense3=full_connected(dense2_dropout,weights['W_dense3'],biases['b_dense3'])
        # Layer Six: Activation
        dense3=relu(dense3,'activation_6')

    # Layer Seven: Fully Connected (Output)
    with tf.name_scope("Output_Layer") as scope:
        logits=full_connected(dense3,weights['W_output'],biases['b_output'])

    return logits 

Train, Validate and Test the Model: Simplified Convolutional Network Architecture

In [25]:
with tf.name_scope("EntropyCost") as scope:
    logits_cnn=Simple_CNN(X)
    cross_entropy=tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y,logits=logits_cnn)
    loss=tf.reduce_mean(cross_entropy)
    
# L-2 Regularization
penalty_term=1e-6

vars=tf.trainable_variables()
l2_loss_term=tf.add_n([tf.nn.l2_loss(var) for var in vars if 'W_' in var.name])
loss_operation=(loss+penalty_term*l2_loss_term)
    
optimizer=tf.train.AdamOptimizer()
training_operation=optimizer.minimize(loss_operation)
In [26]:
with tf.name_scope("Evaluate") as scope:
    correct_prediction=tf.equal(tf.argmax(logits_cnn,1),tf.argmax(one_hot_y,1))
    accuracy_operation=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
    accuracy_summary=tf.summary.scalar("accuracy",accuracy_operation)
    saver=tf.train.Saver()
In [27]:
# Creating a graph
# merged=tf.merge_all_summaries/()
# writer=tf.train.SummaryWriter('./graph-logs',sess.graph_def)

# merged=tf.summary.merge_all()
# writer=tf.summary.FileWriter('./graph-logs',sess.graph_def)
In [28]:
init=tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    data_size=len(X_train_balanced)
    
    print("Convolution Neural Network Training in progress...")
    print()
    CNNarchitecture_start=time.clock()
    
    # Logging data
    Epochs=[]
    Training_losses=[]
    Training_accuracies=[]
    Validation_accuracies=[]
    
    for i in range(EPOCHS):
        Epoch_time=time.clock()
        X_train_final,y_train_final=shuffle(X_train_balanced,y_train_balanced)
        
        for offset in range(0,data_size,BATCH_SIZE):
            end=offset+BATCH_SIZE
            batch_x,batch_y=X_train_final[offset:end],y_train_final[offset:end]
#             _,l,summary_str=sess.run([training_operation, loss_operation, merged], feed_dict={X:batch_x, Y:batch_y, keep_prob:0.50})
            _,l=sess.run([training_operation, loss], feed_dict={X:batch_x, Y:batch_y, keep_prob:0.50})
        
        training_accuracy=evaluate(X_train_balanced,y_train_balanced)
        validation_accuracy=evaluate(X_valid_shuffle,y_valid_shuffle)
        
#         writer.add_summary(summary_str,i)
#         writer.flush()
        
        print("EPOCH {} with forward-backward propagation time of {}s".format(i+1,round(time.clock()-Epoch_time,3)))
        print("Learning Rate of {:.8f}".format(sess.run(optimizer._lr_t)))
        print("Training batch loss at Epoch {}: {:.5f}".format(i+1, l))
        print("Training Accuracy of {:.5f}".format(training_accuracy))
        print("Validation Accuracy of {:.5f}".format(validation_accuracy))
        print()
        
        # Logging data
        Epochs.append(i)
        Training_losses.append(l)
        Training_accuracies.append(training_accuracy)
        Validation_accuracies.append(validation_accuracy) 
    
    CNNarchitecture_end=time.clock()    
    saver.save(sess,save_path)
    print("Simplfied CNN architecture Train-Test time {}s.".format(round((CNNarchitecture_end-CNNarchitecture_start),2)))
    print("Simplified CNN model trained and saved.")
Convolution Neural Network Training in progress...

EPOCH 1 with forward-backward propagation time of 31.503s
Learning Rate of 0.00100000
Training batch loss at Epoch 1: 0.91458
Training Accuracy of 0.77798
Validation Accuracy of 0.65737

EPOCH 2 with forward-backward propagation time of 30.923s
Learning Rate of 0.00100000
Training batch loss at Epoch 2: 0.52883
Training Accuracy of 0.93447
Validation Accuracy of 0.84989

EPOCH 3 with forward-backward propagation time of 31.097s
Learning Rate of 0.00100000
Training batch loss at Epoch 3: 0.28881
Training Accuracy of 0.96640
Validation Accuracy of 0.88141

EPOCH 4 with forward-backward propagation time of 31.05s
Learning Rate of 0.00100000
Training batch loss at Epoch 4: 0.19330
Training Accuracy of 0.97766
Validation Accuracy of 0.89569

EPOCH 5 with forward-backward propagation time of 31.047s
Learning Rate of 0.00100000
Training batch loss at Epoch 5: 0.13464
Training Accuracy of 0.97865
Validation Accuracy of 0.90748

EPOCH 6 with forward-backward propagation time of 31.082s
Learning Rate of 0.00100000
Training batch loss at Epoch 6: 0.19265
Training Accuracy of 0.98766
Validation Accuracy of 0.91429

EPOCH 7 with forward-backward propagation time of 31.046s
Learning Rate of 0.00100000
Training batch loss at Epoch 7: 0.09348
Training Accuracy of 0.98664
Validation Accuracy of 0.93424

EPOCH 8 with forward-backward propagation time of 30.886s
Learning Rate of 0.00100000
Training batch loss at Epoch 8: 0.02572
Training Accuracy of 0.99295
Validation Accuracy of 0.93175

EPOCH 9 with forward-backward propagation time of 30.934s
Learning Rate of 0.00100000
Training batch loss at Epoch 9: 0.06734
Training Accuracy of 0.99353
Validation Accuracy of 0.94127

EPOCH 10 with forward-backward propagation time of 30.959s
Learning Rate of 0.00100000
Training batch loss at Epoch 10: 0.02591
Training Accuracy of 0.99624
Validation Accuracy of 0.93492

EPOCH 11 with forward-backward propagation time of 30.887s
Learning Rate of 0.00100000
Training batch loss at Epoch 11: 0.01062
Training Accuracy of 0.99571
Validation Accuracy of 0.94717

EPOCH 12 with forward-backward propagation time of 30.969s
Learning Rate of 0.00100000
Training batch loss at Epoch 12: 0.10093
Training Accuracy of 0.99376
Validation Accuracy of 0.92630

EPOCH 13 with forward-backward propagation time of 30.859s
Learning Rate of 0.00100000
Training batch loss at Epoch 13: 0.03949
Training Accuracy of 0.99398
Validation Accuracy of 0.93855

EPOCH 14 with forward-backward propagation time of 30.919s
Learning Rate of 0.00100000
Training batch loss at Epoch 14: 0.01348
Training Accuracy of 0.99682
Validation Accuracy of 0.94467

EPOCH 15 with forward-backward propagation time of 30.974s
Learning Rate of 0.00100000
Training batch loss at Epoch 15: 0.18451
Training Accuracy of 0.99779
Validation Accuracy of 0.93401

EPOCH 16 with forward-backward propagation time of 30.937s
Learning Rate of 0.00100000
Training batch loss at Epoch 16: 0.14611
Training Accuracy of 0.99772
Validation Accuracy of 0.93946

EPOCH 17 with forward-backward propagation time of 30.961s
Learning Rate of 0.00100000
Training batch loss at Epoch 17: 0.20113
Training Accuracy of 0.99696
Validation Accuracy of 0.94399

EPOCH 18 with forward-backward propagation time of 31.006s
Learning Rate of 0.00100000
Training batch loss at Epoch 18: 0.00035
Training Accuracy of 0.99767
Validation Accuracy of 0.94807

EPOCH 19 with forward-backward propagation time of 31.163s
Learning Rate of 0.00100000
Training batch loss at Epoch 19: 0.01197
Training Accuracy of 0.99870
Validation Accuracy of 0.95215

EPOCH 20 with forward-backward propagation time of 31.222s
Learning Rate of 0.00100000
Training batch loss at Epoch 20: 0.12433
Training Accuracy of 0.99735
Validation Accuracy of 0.95329

EPOCH 21 with forward-backward propagation time of 31.166s
Learning Rate of 0.00100000
Training batch loss at Epoch 21: 0.00016
Training Accuracy of 0.99839
Validation Accuracy of 0.95624

EPOCH 22 with forward-backward propagation time of 30.822s
Learning Rate of 0.00100000
Training batch loss at Epoch 22: 0.01641
Training Accuracy of 0.99808
Validation Accuracy of 0.95601

EPOCH 23 with forward-backward propagation time of 30.878s
Learning Rate of 0.00100000
Training batch loss at Epoch 23: 0.22609
Training Accuracy of 0.99681
Validation Accuracy of 0.94649

EPOCH 24 with forward-backward propagation time of 30.971s
Learning Rate of 0.00100000
Training batch loss at Epoch 24: 0.02588
Training Accuracy of 0.99840
Validation Accuracy of 0.94921

EPOCH 25 with forward-backward propagation time of 31.134s
Learning Rate of 0.00100000
Training batch loss at Epoch 25: 0.05620
Training Accuracy of 0.99882
Validation Accuracy of 0.95465

EPOCH 26 with forward-backward propagation time of 31.253s
Learning Rate of 0.00100000
Training batch loss at Epoch 26: 0.00471
Training Accuracy of 0.99860
Validation Accuracy of 0.95397

EPOCH 27 with forward-backward propagation time of 31.011s
Learning Rate of 0.00100000
Training batch loss at Epoch 27: 0.07035
Training Accuracy of 0.99778
Validation Accuracy of 0.94626

EPOCH 28 with forward-backward propagation time of 30.959s
Learning Rate of 0.00100000
Training batch loss at Epoch 28: 0.06827
Training Accuracy of 0.99904
Validation Accuracy of 0.96009

EPOCH 29 with forward-backward propagation time of 31.063s
Learning Rate of 0.00100000
Training batch loss at Epoch 29: 0.02601
Training Accuracy of 0.99889
Validation Accuracy of 0.95306

EPOCH 30 with forward-backward propagation time of 31.108s
Learning Rate of 0.00100000
Training batch loss at Epoch 30: 0.01594
Training Accuracy of 0.99704
Validation Accuracy of 0.94762

EPOCH 31 with forward-backward propagation time of 31.14s
Learning Rate of 0.00100000
Training batch loss at Epoch 31: 0.00036
Training Accuracy of 0.99895
Validation Accuracy of 0.95556

EPOCH 32 with forward-backward propagation time of 31.157s
Learning Rate of 0.00100000
Training batch loss at Epoch 32: 0.00162
Training Accuracy of 0.99876
Validation Accuracy of 0.96077

EPOCH 33 with forward-backward propagation time of 30.932s
Learning Rate of 0.00100000
Training batch loss at Epoch 33: 0.21287
Training Accuracy of 0.99936
Validation Accuracy of 0.96327

EPOCH 34 with forward-backward propagation time of 31.011s
Learning Rate of 0.00100000
Training batch loss at Epoch 34: 0.00799
Training Accuracy of 0.99905
Validation Accuracy of 0.96145

EPOCH 35 with forward-backward propagation time of 31.034s
Learning Rate of 0.00100000
Training batch loss at Epoch 35: 0.10131
Training Accuracy of 0.99855
Validation Accuracy of 0.95714

EPOCH 36 with forward-backward propagation time of 30.903s
Learning Rate of 0.00100000
Training batch loss at Epoch 36: 0.03298
Training Accuracy of 0.99885
Validation Accuracy of 0.95488

EPOCH 37 with forward-backward propagation time of 31.015s
Learning Rate of 0.00100000
Training batch loss at Epoch 37: 0.00068
Training Accuracy of 0.99876
Validation Accuracy of 0.95488

EPOCH 38 with forward-backward propagation time of 31.025s
Learning Rate of 0.00100000
Training batch loss at Epoch 38: 0.06413
Training Accuracy of 0.99881
Validation Accuracy of 0.94671

EPOCH 39 with forward-backward propagation time of 30.955s
Learning Rate of 0.00100000
Training batch loss at Epoch 39: 0.00873
Training Accuracy of 0.99889
Validation Accuracy of 0.95850

EPOCH 40 with forward-backward propagation time of 31.038s
Learning Rate of 0.00100000
Training batch loss at Epoch 40: 0.00061
Training Accuracy of 0.99896
Validation Accuracy of 0.95918

EPOCH 41 with forward-backward propagation time of 30.935s
Learning Rate of 0.00100000
Training batch loss at Epoch 41: 0.14197
Training Accuracy of 0.99956
Validation Accuracy of 0.96553

EPOCH 42 with forward-backward propagation time of 31.157s
Learning Rate of 0.00100000
Training batch loss at Epoch 42: 0.02533
Training Accuracy of 0.99944
Validation Accuracy of 0.95805

EPOCH 43 with forward-backward propagation time of 30.941s
Learning Rate of 0.00100000
Training batch loss at Epoch 43: 0.00847
Training Accuracy of 0.99917
Validation Accuracy of 0.96667

EPOCH 44 with forward-backward propagation time of 31.062s
Learning Rate of 0.00100000
Training batch loss at Epoch 44: 0.00000
Training Accuracy of 0.99936
Validation Accuracy of 0.95737

EPOCH 45 with forward-backward propagation time of 31.001s
Learning Rate of 0.00100000
Training batch loss at Epoch 45: 0.00063
Training Accuracy of 0.99952
Validation Accuracy of 0.95420

EPOCH 46 with forward-backward propagation time of 30.955s
Learning Rate of 0.00100000
Training batch loss at Epoch 46: 0.06517
Training Accuracy of 0.99916
Validation Accuracy of 0.95533

EPOCH 47 with forward-backward propagation time of 30.989s
Learning Rate of 0.00100000
Training batch loss at Epoch 47: 0.00023
Training Accuracy of 0.99952
Validation Accuracy of 0.95850

EPOCH 48 with forward-backward propagation time of 30.989s
Learning Rate of 0.00100000
Training batch loss at Epoch 48: 0.06816
Training Accuracy of 0.99881
Validation Accuracy of 0.95442

EPOCH 49 with forward-backward propagation time of 31.004s
Learning Rate of 0.00100000
Training batch loss at Epoch 49: 0.09032
Training Accuracy of 0.99919
Validation Accuracy of 0.95782

EPOCH 50 with forward-backward propagation time of 30.998s
Learning Rate of 0.00100000
Training batch loss at Epoch 50: 0.03572
Training Accuracy of 0.99945
Validation Accuracy of 0.95805

EPOCH 51 with forward-backward propagation time of 30.878s
Learning Rate of 0.00100000
Training batch loss at Epoch 51: 0.00039
Training Accuracy of 0.99966
Validation Accuracy of 0.96259

EPOCH 52 with forward-backward propagation time of 30.92s
Learning Rate of 0.00100000
Training batch loss at Epoch 52: 0.00033
Training Accuracy of 0.99917
Validation Accuracy of 0.95714

EPOCH 53 with forward-backward propagation time of 31.097s
Learning Rate of 0.00100000
Training batch loss at Epoch 53: 0.00030
Training Accuracy of 0.99926
Validation Accuracy of 0.96463

EPOCH 54 with forward-backward propagation time of 30.962s
Learning Rate of 0.00100000
Training batch loss at Epoch 54: 0.01292
Training Accuracy of 0.99913
Validation Accuracy of 0.95737

EPOCH 55 with forward-backward propagation time of 30.923s
Learning Rate of 0.00100000
Training batch loss at Epoch 55: 0.01455
Training Accuracy of 0.99907
Validation Accuracy of 0.96485

EPOCH 56 with forward-backward propagation time of 31.045s
Learning Rate of 0.00100000
Training batch loss at Epoch 56: 0.01687
Training Accuracy of 0.99952
Validation Accuracy of 0.96440

EPOCH 57 with forward-backward propagation time of 30.962s
Learning Rate of 0.00100000
Training batch loss at Epoch 57: 0.00020
Training Accuracy of 0.99957
Validation Accuracy of 0.96372

EPOCH 58 with forward-backward propagation time of 30.957s
Learning Rate of 0.00100000
Training batch loss at Epoch 58: 0.00362
Training Accuracy of 0.99950
Validation Accuracy of 0.96190

EPOCH 59 with forward-backward propagation time of 31.056s
Learning Rate of 0.00100000
Training batch loss at Epoch 59: 0.06249
Training Accuracy of 0.99961
Validation Accuracy of 0.96190

EPOCH 60 with forward-backward propagation time of 31.067s
Learning Rate of 0.00100000
Training batch loss at Epoch 60: 0.01805
Training Accuracy of 0.99879
Validation Accuracy of 0.95533

EPOCH 61 with forward-backward propagation time of 31.034s
Learning Rate of 0.00100000
Training batch loss at Epoch 61: 0.00019
Training Accuracy of 0.99923
Validation Accuracy of 0.95646

EPOCH 62 with forward-backward propagation time of 31.067s
Learning Rate of 0.00100000
Training batch loss at Epoch 62: 0.00408
Training Accuracy of 0.99946
Validation Accuracy of 0.95964

EPOCH 63 with forward-backward propagation time of 31.02s
Learning Rate of 0.00100000
Training batch loss at Epoch 63: 0.00000
Training Accuracy of 0.99959
Validation Accuracy of 0.96304

EPOCH 64 with forward-backward propagation time of 30.992s
Learning Rate of 0.00100000
Training batch loss at Epoch 64: 0.00081
Training Accuracy of 0.99969
Validation Accuracy of 0.96553

EPOCH 65 with forward-backward propagation time of 30.988s
Learning Rate of 0.00100000
Training batch loss at Epoch 65: 0.19091
Training Accuracy of 0.99956
Validation Accuracy of 0.96531

EPOCH 66 with forward-backward propagation time of 31.08s
Learning Rate of 0.00100000
Training batch loss at Epoch 66: 0.00498
Training Accuracy of 0.99972
Validation Accuracy of 0.97007

EPOCH 67 with forward-backward propagation time of 30.952s
Learning Rate of 0.00100000
Training batch loss at Epoch 67: 0.00014
Training Accuracy of 0.99945
Validation Accuracy of 0.95760

EPOCH 68 with forward-backward propagation time of 31.015s
Learning Rate of 0.00100000
Training batch loss at Epoch 68: 0.00009
Training Accuracy of 0.99962
Validation Accuracy of 0.96939

EPOCH 69 with forward-backward propagation time of 30.954s
Learning Rate of 0.00100000
Training batch loss at Epoch 69: 0.06171
Training Accuracy of 0.99960
Validation Accuracy of 0.96304

EPOCH 70 with forward-backward propagation time of 30.981s
Learning Rate of 0.00100000
Training batch loss at Epoch 70: 0.00019
Training Accuracy of 0.99975
Validation Accuracy of 0.96667

EPOCH 71 with forward-backward propagation time of 30.94s
Learning Rate of 0.00100000
Training batch loss at Epoch 71: 0.00463
Training Accuracy of 0.99980
Validation Accuracy of 0.96576

EPOCH 72 with forward-backward propagation time of 30.985s
Learning Rate of 0.00100000
Training batch loss at Epoch 72: 0.00860
Training Accuracy of 0.99960
Validation Accuracy of 0.96372

EPOCH 73 with forward-backward propagation time of 30.918s
Learning Rate of 0.00100000
Training batch loss at Epoch 73: 0.00003
Training Accuracy of 0.99968
Validation Accuracy of 0.96576

EPOCH 74 with forward-backward propagation time of 30.955s
Learning Rate of 0.00100000
Training batch loss at Epoch 74: 0.00033
Training Accuracy of 0.99968
Validation Accuracy of 0.95374

EPOCH 75 with forward-backward propagation time of 30.936s
Learning Rate of 0.00100000
Training batch loss at Epoch 75: 0.13075
Training Accuracy of 0.99947
Validation Accuracy of 0.96009

Simplified CNN architecture Train-Test time 2325.88s.
Simplified CNN model trained and saved.

Chosen Model Evaluation

In [29]:
plt.figure(figsize=(10,5))
plt.plot(Epochs,Training_accuracies,'green', label='Training accuracy')
plt.plot(Epochs,Validation_accuracies,'blue', label='Validation accuracy')
plt.title("Convolution Neural Network accuracies vs. Epoch")
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.grid(True)
plt.show()
         
plt.figure(figsize=(10,5))
plt.plot(Epochs,Training_losses,'black', label='Training loss')
plt.title("Convolution Neural Network training losses vs. Epoch")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(loc='upper right')
plt.grid(True)
plt.show()
In [30]:
meta_path='./tf-sessions-data/simplecnn_m2_e75_lr100.meta'
data_path='./tf-sessions-data/simplecnn_m2_e75_lr100.data-00000-of-00001'
index_path='./tf-sessions-data/simplecnn_m2_e75_lr100.index'
save_path='./tf-sessions-data/simplecnn_m2_e75_lr100'
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # saver=tf.train.import_meta_graph(meta_path)
    saver.restore(sess,save_path)
    test_accuracy=evaluate(X_test_shuffle,y_test_shuffle)
    
print("No further training or tuning of hyperparameters!")
print("Testing dataset has an accuracy of {} %".format(round(test_accuracy*100),2))
INFO:tensorflow:Restoring parameters from ./tf-sessions-data/simplecnn_m2_e75_lr100
No further training or tuning of hyperparameters!
Testing dataset has an accuracy of 95.0 %
In [31]:
# Confusion Matrix Test Prediction using Sklearn.metrics
print("Confusion matrix plot using Sklearn.metrics\n")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess,save_path)
    y_test_prediction=sess.run(tf.argmax(logits_cnn,1), feed_dict={X: X_test_shuffle, keep_prob: 1.0})    

confusion_matrix_test=confusion_matrix(y_true=y_test_shuffle,y_pred=y_test_prediction)
fig,ax=plt.subplots()
heatmap=ax.pcolor(confusion_matrix_test, cmap=plt.cm.Blues,alpha=0.6)
fig=plt.gcf()
fig.set_size_inches(12,12)
ax.set_frame_on(False)
ax.invert_yaxis()
ax.xaxis.tick_top()

ticks=np.arange(n_classes)

plt.xticks(ticks,range(n_classes),rotation=90)
plt.yticks(ticks,range(n_classes))

ax.grid(False)

ax = plt.gca()
plt.xlabel('Predicted Labels')
plt.ylabel('True Labels')
plt.show()
Confusion matrix plot using Sklearn.metrics

INFO:tensorflow:Restoring parameters from ./tf-sessions-data/simplecnn_m2_e75_lr100
In [32]:
print("Confusion matrix details using pandas\n")
print("Model accuracy of {} on test-image dataset".format(round(test_accuracy,2)))
cm_pandas=ConfusionMatrix_pandas(y_test_shuffle,y_test_prediction).print_stats()

accuracy=streaming_accuracy(predictions=y_test_prediction,labels=y_test_shuffle)

confusion_matrix_model=confusion_matrix_test.copy() # copy so zeroing the diagonal below does not alter confusion_matrix_test
for i in range(n_classes):
    true_count=np.sum(y_test_shuffle==i)
    true_prediction=confusion_matrix_test[i,i]
    accuracy=100*true_prediction/true_count
    precision=100*true_prediction/np.sum(confusion_matrix_test[:,i])
    confusion_matrix_model[i,i]=0
    
    misclassified_index=np.argmax(confusion_matrix_model[i,:])

    print("CLASS {}: {}".format(i,arr_classes[i]))
    print("Accuracy: {} %".format(round(accuracy,5)))
    print("Precision: {} %".format(round(precision,5)))
    print("Class has been commonly confused/misclassified as class {}- '{}' with probability {} %"\
          .format(misclassified_index,arr_classes[misclassified_index],round(100*confusion_matrix_test[i,misclassified_index]/true_count,3)))
    print("\n")
Confusion matrix details using pandas

Model accuracy of 0.95 on test-image dataset
/home/carnd/anaconda3/envs/carnd-term1/lib/python3.5/site-packages/pandas_ml/confusion_matrix/bcm.py:346: RuntimeWarning: divide by zero encountered in double_scalars
  return(np.float64(self.LRP) / self.LRN)
/home/carnd/anaconda3/envs/carnd-term1/lib/python3.5/site-packages/pandas_ml/confusion_matrix/bcm.py:332: RuntimeWarning: divide by zero encountered in double_scalars
  return(np.float64(self.TPR) / self.FPR)
Confusion Matrix:

Predicted   0    1    2    3    4    5    6    7    8    9   ...      34   35  \
Actual                                                       ...                
0          59    1    0    0    0    0    0    0    0    0   ...       0    0   
1           1  704   13    0    0    0    0    1    0    0   ...       0    0   
2           0    5  735    1    0    2    0    0    1    0   ...       0    0   
3           0    0    0  434    0   10    0    0    0    0   ...       0    0   
4           1   28    4    0  605    3    0    1    4    2   ...       0    0   
5           2   11   10   18    0  578    1    3    1    0   ...       0    0   
6           0    0    0    6    0    4  137    0    0    0   ...       0    0   
7           0    1    2    2    0    4    0  390   44    2   ...       0    0   
8           1    2    2    2    0    6    0    3  430    2   ...       1    0   
9           0    0    0    0    0    0    0    0    0  480   ...       0    0   
10          0    1    0    0    0    2    0    0    0    3   ...       0    0   
11          0    0    0    0    0    0    0    0    0    0   ...       0    0   
12          0    0    0    0    0    0    0    0    0    0   ...       0    0   
13          0    0    0    0    0    0    0    0    0    0   ...       0    0   
14          0    0    0    0    0    0    0    0    0    0   ...       0    0   
15          0    0    3    0    0    0    0    0    0    0   ...       0    0   
16          0    0    0    0    0    0    0    0    0    0   ...       0    0   
17          0    0    0    0    0    0    0    0    0    0   ...       0    0   
18          0    2    0    0    1    0    0    0    0    0   ...       0    0   
19          0    0    0    0    0    0    0    0    0    0   ...       0    0   
20          0    0    0    0    0    0    0    0    0    0   ...       0    0   
21          0    0    0    0    0    0    0    0    0    0   ...       0    0   
22          0    0    0    0    0    0    0    0    0    0   ...       0    0   
23          0    0    0    0    0    0    0    0    0    0   ...       0    0   
24          0    1    0    0    0    0    0    0    0    0   ...       0    0   
25          0    1    0    0    0    0    0    0    0    0   ...       0    0   
26          0    0    0    0    0    0    0    0    0    0   ...       0    0   
27          0   24    0    0    0    0    0    0    0    0   ...       0    0   
28          0    0    0    0    0    0    0    0    0    0   ...       0    0   
29          0    0    0    0    0    0    0    0    0    0   ...       0    0   
30          0    0    1    0    0    0    0    0    0    0   ...       0    0   
31          0    0    0    0    0    0    0    0    0    0   ...       0    0   
32          0    0    0    0    0    0    0    0    0    0   ...       0    0   
33          0    0    0    0    0    0    0    0    0    0   ...       0    0   
34          0    0    0    0    0    0    0    0    0    0   ...     120    0   
35          0    0    2    0    0    0    0    0    0    0   ...       1  376   
36          0    3    1    0    0    0    0    0    0    0   ...       0    0   
37          0    0    0    0    0    0    0    0    0    0   ...       0    0   
38          0    7    3    0    0    2    0    1    2    1   ...       5    0   
39          0    0    0    0    0    0    0    0    0    0   ...       0    0   
40          0    1    0    0    0    0    0    0    0    0   ...       0    0   
41          0    0    0    0    0    0    0    0    0    1   ...       0    0   
42          0    0    0    0    0    0    0    0    0    0   ...       0    0   
__all__    64  792  776  463  606  611  138  399  482  491   ...     127  376   

Predicted   36  37   38   39   40  41  42  __all__  
Actual                                              
0            0   0    0    0    0   0   0       60  
1            0   0    0    0    0   0   0      720  
2            0   1    2    1    1   0   0      750  
3            0   0    0    0    0   0   0      450  
4            2   0    1    0    4   0   0      660  
5            0   0    2    0    2   0   0      630  
6            0   0    0    0    0   0   2      150  
7            0   0    0    0    4   0   0      450  
8            0   0    0    0    1   0   0      450  
9            0   0    0    0    0   0   0      480  
10           0   1    1    0    1   0   0      660  
11           0   0    0    0    0   0   0      420  
12           2   0    0    0    3   0   0      690  
13           1   0    0    0    1   0   0      720  
14           0   0    0    0    0   0   0      270  
15           0   0    1    0    0   0   0      210  
16           0   0    0    0    0   0   0      150  
17           3   0    1    0    8   0   0      360  
18           0   0    0    0    4   0   0      390  
19           0   0    0    0    0   0   0       60  
20           0   0    0    0    0   0   0       90  
21           2   0    0    0    0   0   0       90  
22           0   0    2    5    0   0   0      120  
23           0   0    0    0    0   0   0      150  
24           0   0    0    1    0   0   0       90  
25           0   0    0    1    0   0   0      480  
26           0   0    0    0    1   0   0      180  
27           0   0    0    0    0   0   0       60  
28           0   0    0    0    0   0   0      150  
29           0   0    0    0    0   0   0       90  
30           0   0    0    1    0   0   0      150  
31           0   0    0    0    0   0   0      270  
32           0   0    0    0    0   0   0       60  
33           0   0    0    0    0   0   0      210  
34           0   0    0    0    0   0   0      120  
35           0   4    4    0    0   0   0      390  
36         112   0    0    0    2   0   0      120  
37           0  59    0    1    0   0   0       60  
38           0  18  631    2    6   0   0      690  
39           0   0    1   88    0   0   0       90  
40           0   0    0    0   84   0   0       90  
41           0   0    0    0    0  59   0       60  
42           0   0    0    0    0   0  90       90  
__all__    122  83  646  100  122  59  92    12630  

[44 rows x 44 columns]


Overall Statistics:

Accuracy: 0.946714172605
95% CI: (0.94265337629610835, 0.95056752281246326)
No Information Rate: ToDo
P-Value [Acc > NIR]: 0.0
Kappa: 0.944647198738
Mcnemar's Test P-Value: ToDo


Class Statistics:

Classes                                         0           1           2   \
Population                                   12630       12630       12630   
P: Condition positive                           60         720         750   
N: Condition negative                        12570       11910       11880   
Test outcome positive                           64         792         776   
Test outcome negative                        12566       11838       11854   
TP: True Positive                               59         704         735   
TN: True Negative                            12565       11822       11839   
FP: False Positive                               5          88          41   
FN: False Negative                               1          16          15   
TPR: (Sensitivity, hit rate, recall)      0.983333    0.977778        0.98   
TNR=SPC: (Specificity)                    0.999602    0.992611    0.996549   
PPV: Pos Pred Value (Precision)           0.921875    0.888889    0.947165   
NPV: Neg Pred Value                        0.99992    0.998648    0.998735   
FPR: False-out                         0.000397772  0.00738875  0.00345118   
FDR: False Discovery Rate                 0.078125    0.111111   0.0528351   
FNR: Miss Rate                           0.0166667   0.0222222        0.02   
ACC: Accuracy                             0.999525    0.991766    0.995566   
F1 score                                  0.951613    0.931217    0.963303   
MCC: Matthews correlation coefficient     0.951875    0.928039    0.961102   
Informedness                              0.982936    0.970389    0.976549   
Markedness                                0.921795    0.887537      0.9459   
Prevalence                              0.00475059   0.0570071   0.0593824   
LR+: Positive likelihood ratio              2472.1     132.333     283.961   
LR-: Negative likelihood ratio           0.0166733   0.0223876   0.0200693   
DOR: Diagnostic odds ratio                  148267        5911       14149   
FOR: False omission rate               7.95798e-05  0.00135158   0.0012654   

Classes                                        3            4           5   \
Population                                  12630        12630       12630   
P: Condition positive                         450          660         630   
N: Condition negative                       12180        11970       12000   
Test outcome positive                         463          606         611   
Test outcome negative                       12167        12024       12019   
TP: True Positive                             434          605         578   
TN: True Negative                           12151        11969       11967   
FP: False Positive                             29            1          33   
FN: False Negative                             16           55          52   
TPR: (Sensitivity, hit rate, recall)     0.964444     0.916667     0.91746   
TNR=SPC: (Specificity)                   0.997619     0.999916     0.99725   
PPV: Pos Pred Value (Precision)          0.937365      0.99835     0.94599   
NPV: Neg Pred Value                      0.998685     0.995426    0.995674   
FPR: False-out                         0.00238095  8.35422e-05     0.00275   
FDR: False Discovery Rate                0.062635   0.00165017   0.0540098   
FNR: Miss Rate                          0.0355556    0.0833333   0.0825397   
ACC: Accuracy                            0.996437     0.995566     0.99327   
F1 score                                 0.950712     0.955766    0.931507   
MCC: Matthews correlation coefficient    0.948968     0.954399    0.928089   
Informedness                             0.962063     0.916583     0.91471   
Markedness                                0.93605     0.993776    0.941664   
Prevalence                              0.0356295    0.0522565   0.0498812   
LR+: Positive likelihood ratio            405.067      10972.5     333.622   
LR-: Negative likelihood ratio          0.0356404    0.0833403   0.0827673   
DOR: Diagnostic odds ratio                11365.4       131659     4030.84   
FOR: False omission rate               0.00131503   0.00457418  0.00432648   

Classes                                         6            7           8   \
Population                                   12630        12630       12630   
P: Condition positive                          150          450         450   
N: Condition negative                        12480        12180       12180   
Test outcome positive                          138          399         482   
Test outcome negative                        12492        12231       12148   
TP: True Positive                              137          390         430   
TN: True Negative                            12479        12171       12128   
FP: False Positive                               1            9          52   
FN: False Negative                              13           60          20   
TPR: (Sensitivity, hit rate, recall)      0.913333     0.866667    0.955556   
TNR=SPC: (Specificity)                     0.99992     0.999261    0.995731   
PPV: Pos Pred Value (Precision)           0.992754     0.977444    0.892116   
NPV: Neg Pred Value                       0.998959     0.995094    0.998354   
FPR: False-out                         8.01282e-05  0.000738916  0.00426929   
FDR: False Discovery Rate               0.00724638    0.0225564    0.107884   
FNR: Miss Rate                           0.0866667     0.133333   0.0444444   
ACC: Accuracy                             0.998892     0.994537    0.994299   
F1 score                                  0.951389     0.918728    0.922747   
MCC: Matthews correlation coefficient     0.951675     0.917686    0.920376   
Informedness                              0.913253     0.865928    0.951286   
Markedness                                0.991713     0.972538     0.89047   
Prevalence                               0.0118765    0.0356295   0.0356295   
LR+: Positive likelihood ratio             11398.4      1172.89     223.821   
LR-: Negative likelihood ratio           0.0866736     0.133432    0.044635   
DOR: Diagnostic odds ratio                  131509      8790.17     5014.46   
FOR: False omission rate                0.00104067   0.00490557  0.00164636   

Classes                                        9      ...               33  \
Population                                  12630     ...            12630   
P: Condition positive                         480     ...              210   
N: Condition negative                       12150     ...            12420   
Test outcome positive                         491     ...              211   
Test outcome negative                       12139     ...            12419   
TP: True Positive                             480     ...              209   
TN: True Negative                           12139     ...            12418   
FP: False Positive                             11     ...                2   
FN: False Negative                              0     ...                1   
TPR: (Sensitivity, hit rate, recall)            1     ...         0.995238   
TNR=SPC: (Specificity)                   0.999095     ...         0.999839   
PPV: Pos Pred Value (Precision)          0.977597     ...         0.990521   
NPV: Neg Pred Value                             1     ...         0.999919   
FPR: False-out                         0.00090535     ...      0.000161031   
FDR: False Discovery Rate               0.0224033     ...       0.00947867   
FNR: Miss Rate                                  0     ...        0.0047619   
ACC: Accuracy                            0.999129     ...         0.999762   
F1 score                                 0.988671     ...         0.992874   
MCC: Matthews correlation coefficient    0.988287     ...         0.992756   
Informedness                             0.999095     ...         0.995077   
Markedness                               0.977597     ...         0.990441   
Prevalence                              0.0380048     ...        0.0166271   
LR+: Positive likelihood ratio            1104.55     ...          6180.43   
LR-: Negative likelihood ratio                  0     ...       0.00476267   
DOR: Diagnostic odds ratio                    inf     ...      1.29768e+06   
FOR: False omission rate                        0     ...      8.05218e-05   

Classes                                         34          35           36  \
Population                                   12630       12630        12630   
P: Condition positive                          120         390          120   
N: Condition negative                        12510       12240        12510   
Test outcome positive                          127         376          122   
Test outcome negative                        12503       12254        12508   
TP: True Positive                              120         376          112   
TN: True Negative                            12503       12240        12500   
FP: False Positive                               7           0           10   
FN: False Negative                               0          14            8   
TPR: (Sensitivity, hit rate, recall)             1    0.964103     0.933333   
TNR=SPC: (Specificity)                     0.99944           1     0.999201   
PPV: Pos Pred Value (Precision)           0.944882           1     0.918033   
NPV: Neg Pred Value                              1    0.998858      0.99936   
FPR: False-out                         0.000559552           0  0.000799361   
FDR: False Discovery Rate                0.0551181           0    0.0819672   
FNR: Miss Rate                                   0   0.0358974    0.0666667   
ACC: Accuracy                             0.999446    0.998892     0.998575   
F1 score                                   0.97166    0.981723      0.92562   
MCC: Matthews correlation coefficient     0.971778    0.981326     0.924933   
Informedness                               0.99944    0.964103     0.932534   
Markedness                                0.944882    0.998858     0.917393   
Prevalence                              0.00950119   0.0308789   0.00950119   
LR+: Positive likelihood ratio             1787.14         inf       1167.6   
LR-: Negative likelihood ratio                   0   0.0358974      0.06672   
DOR: Diagnostic odds ratio                     inf         inf        17500   
FOR: False omission rate                         0  0.00114248  0.000639591   

Classes                                         37          38           39  \
Population                                   12630       12630        12630   
P: Condition positive                           60         690           90   
N: Condition negative                        12570       11940        12540   
Test outcome positive                           83         646          100   
Test outcome negative                        12547       11984        12530   
TP: True Positive                               59         631           88   
TN: True Negative                            12546       11925        12528   
FP: False Positive                              24          15           12   
FN: False Negative                               1          59            2   
TPR: (Sensitivity, hit rate, recall)      0.983333    0.914493     0.977778   
TNR=SPC: (Specificity)                    0.998091    0.998744     0.999043   
PPV: Pos Pred Value (Precision)           0.710843     0.97678         0.88   
NPV: Neg Pred Value                        0.99992    0.995077      0.99984   
FPR: False-out                          0.00190931  0.00125628  0.000956938   
FDR: False Discovery Rate                 0.289157   0.0232198         0.12   
FNR: Miss Rate                           0.0166667   0.0855072    0.0222222   
ACC: Accuracy                             0.998021    0.994141     0.998892   
F1 score                                  0.825175    0.944611     0.926316   
MCC: Matthews correlation coefficient     0.835201    0.942091     0.927063   
Informedness                              0.981424    0.913236     0.976821   
Markedness                                0.710764    0.971857      0.87984   
Prevalence                              0.00475059   0.0546318   0.00712589   
LR+: Positive likelihood ratio             515.021     727.936      1021.78   
LR-: Negative likelihood ratio           0.0166985   0.0856148    0.0222435   
DOR: Diagnostic odds ratio                 30842.2     8502.46        45936   
FOR: False omission rate               7.97003e-05  0.00492323  0.000159617   

Classes                                         40           41          42  
Population                                   12630        12630       12630  
P: Condition positive                           90           60          90  
N: Condition negative                        12540        12570       12540  
Test outcome positive                          122           59          92  
Test outcome negative                        12508        12571       12538  
TP: True Positive                               84           59          90  
TN: True Negative                            12502        12570       12538  
FP: False Positive                              38            0           2  
FN: False Negative                               6            1           0  
TPR: (Sensitivity, hit rate, recall)      0.933333     0.983333           1  
TNR=SPC: (Specificity)                     0.99697            1    0.999841  
PPV: Pos Pred Value (Precision)           0.688525            1    0.978261  
NPV: Neg Pred Value                        0.99952      0.99992           1  
FPR: False-out                           0.0030303            0  0.00015949  
FDR: False Discovery Rate                 0.311475            0   0.0217391  
FNR: Miss Rate                           0.0666667    0.0166667           0  
ACC: Accuracy                             0.996516     0.999921    0.999842  
F1 score                                  0.792453     0.991597    0.989011  
MCC: Matthews correlation coefficient     0.800056     0.991592    0.988992  
Informedness                              0.930303     0.983333    0.999841  
Markedness                                0.688045      0.99992    0.978261  
Prevalence                              0.00712589   0.00475059  0.00712589  
LR+: Positive likelihood ratio                 308          inf        6270  
LR-: Negative likelihood ratio           0.0668693    0.0166667           0  
DOR: Diagnostic odds ratio                    4606          inf         inf  
FOR: False omission rate               0.000479693  7.95482e-05           0  

[26 rows x 43 columns]
CLASS 0: Speed limit (20km/h)
Accuracy: 98.33333 %
Precision: 92.1875 %
Class has been commonly confused/misclassified as class 1- 'Speed limit (30km/h)' with probability 1.667 %


CLASS 1: Speed limit (30km/h)
Accuracy: 97.77778 %
Precision: 88.88889 %
Class has been commonly confused/misclassified as class 2- 'Speed limit (50km/h)' with probability 1.806 %


CLASS 2: Speed limit (50km/h)
Accuracy: 98.0 %
Precision: 94.71649 %
Class has been commonly confused/misclassified as class 1- 'Speed limit (30km/h)' with probability 0.667 %


CLASS 3: Speed limit (60km/h)
Accuracy: 96.44444 %
Precision: 93.7365 %
Class has been commonly confused/misclassified as class 5- 'Speed limit (80km/h)' with probability 2.222 %


CLASS 4: Speed limit (70km/h)
Accuracy: 91.66667 %
Precision: 99.83498 %
Class has been commonly confused/misclassified as class 1- 'Speed limit (30km/h)' with probability 4.242 %


CLASS 5: Speed limit (80km/h)
Accuracy: 91.74603 %
Precision: 94.59902 %
Class has been commonly confused/misclassified as class 3- 'Speed limit (60km/h)' with probability 2.857 %


CLASS 6: End of speed limit (80km/h)
Accuracy: 91.33333 %
Precision: 99.27536 %
Class has been commonly confused/misclassified as class 3- 'Speed limit (60km/h)' with probability 4.0 %


CLASS 7: Speed limit (100km/h)
Accuracy: 86.66667 %
Precision: 97.74436 %
Class has been commonly confused/misclassified as class 8- 'Speed limit (120km/h)' with probability 9.778 %


CLASS 8: Speed limit (120km/h)
Accuracy: 95.55556 %
Precision: 89.21162 %
Class has been commonly confused/misclassified as class 5- 'Speed limit (80km/h)' with probability 1.333 %


CLASS 9: No passing
Accuracy: 100.0 %
Precision: 97.75967 %
Class has been commonly confused/misclassified as class 0- 'Speed limit (20km/h)' with probability 0.0 %


CLASS 10: No passing for vehicles over 3.5 metric tons
Accuracy: 97.72727 %
Precision: 99.69088 %
Class has been commonly confused/misclassified as class 30- 'Beware of ice/snow' with probability 0.606 %


CLASS 11: Right-of-way at the next intersection
Accuracy: 90.2381 %
Precision: 98.44156 %
Class has been commonly confused/misclassified as class 30- 'Beware of ice/snow' with probability 8.095 %


CLASS 12: Priority road
Accuracy: 98.98551 %
Precision: 96.87943 %
Class has been commonly confused/misclassified as class 40- 'Roundabout mandatory' with probability 0.435 %


CLASS 13: Yield
Accuracy: 99.58333 %
Precision: 99.72184 %
Class has been commonly confused/misclassified as class 23- 'Slippery road' with probability 0.139 %


CLASS 14: Stop
Accuracy: 100.0 %
Precision: 96.08541 %
Class has been commonly confused/misclassified as class 0- 'Speed limit (20km/h)' with probability 0.0 %


CLASS 15: No vehicles
Accuracy: 98.09524 %
Precision: 96.71362 %
Class has been commonly confused/misclassified as class 2- 'Speed limit (50km/h)' with probability 1.429 %


CLASS 16: Vehicles over 3.5 metric tons prohibited
Accuracy: 100.0 %
Precision: 100.0 %
Class has been commonly confused/misclassified as class 0- 'Speed limit (20km/h)' with probability 0.0 %


CLASS 17: No entry
Accuracy: 93.05556 %
Precision: 100.0 %
Class has been commonly confused/misclassified as class 14- 'Stop' with probability 2.222 %


CLASS 18: General caution
Accuracy: 79.74359 %
Precision: 97.49216 %
Class has been commonly confused/misclassified as class 27- 'Pedestrians' with probability 4.872 %


CLASS 19: Dangerous curve to the left
Accuracy: 90.0 %
Precision: 94.73684 %
Class has been commonly confused/misclassified as class 23- 'Slippery road' with probability 10.0 %


CLASS 20: Dangerous curve to the right
Accuracy: 100.0 %
Precision: 93.75 %
Class has been commonly confused/misclassified as class 0- 'Speed limit (20km/h)' with probability 0.0 %


CLASS 21: Double curve
Accuracy: 65.55556 %
Precision: 74.68354 %
Class has been commonly confused/misclassified as class 29- 'Bicycles crossing' with probability 18.889 %


CLASS 22: Bumpy road
Accuracy: 90.83333 %
Precision: 100.0 %
Class has been commonly confused/misclassified as class 39- 'Keep left' with probability 4.167 %


CLASS 23: Slippery road
Accuracy: 98.66667 %
Precision: 75.89744 %
Class has been commonly confused/misclassified as class 21- 'Double curve' with probability 0.667 %


CLASS 24: Road narrows on the right
Accuracy: 95.55556 %
Precision: 93.47826 %
Class has been commonly confused/misclassified as class 1- 'Speed limit (30km/h)' with probability 1.111 %


CLASS 25: Road work
Accuracy: 94.375 %
Precision: 95.36842 %
Class has been commonly confused/misclassified as class 30- 'Beware of ice/snow' with probability 2.917 %


CLASS 26: Traffic signals
Accuracy: 94.44444 %
Precision: 92.3913 %
Class has been commonly confused/misclassified as class 18- 'General caution' with probability 3.333 %


CLASS 27: Pedestrians
Accuracy: 50.0 %
Precision: 56.60377 %
Class has been commonly confused/misclassified as class 1- 'Speed limit (30km/h)' with probability 40.0 %


CLASS 28: Children crossing
Accuracy: 99.33333 %
Precision: 98.02632 %
Class has been commonly confused/misclassified as class 29- 'Bicycles crossing' with probability 0.667 %


CLASS 29: Bicycles crossing
Accuracy: 98.88889 %
Precision: 80.18018 %
Class has been commonly confused/misclassified as class 24- 'Road narrows on the right' with probability 1.111 %


CLASS 30: Beware of ice/snow
Accuracy: 79.33333 %
Precision: 65.02732 %
Class has been commonly confused/misclassified as class 23- 'Slippery road' with probability 14.0 %


CLASS 31: Wild animals crossing
Accuracy: 98.14815 %
Precision: 99.25094 %
Class has been commonly confused/misclassified as class 23- 'Slippery road' with probability 0.741 %


CLASS 32: End of all speed and passing limits
Accuracy: 100.0 %
Precision: 95.2381 %
Class has been commonly confused/misclassified as class 0- 'Speed limit (20km/h)' with probability 0.0 %


CLASS 33: Turn right ahead
Accuracy: 99.52381 %
Precision: 99.05213 %
Class has been commonly confused/misclassified as class 25- 'Road work' with probability 0.476 %


CLASS 34: Turn left ahead
Accuracy: 100.0 %
Precision: 94.48819 %
Class has been commonly confused/misclassified as class 0- 'Speed limit (20km/h)' with probability 0.0 %


CLASS 35: Ahead only
Accuracy: 96.41026 %
Precision: 100.0 %
Class has been commonly confused/misclassified as class 37- 'Go straight or left' with probability 1.026 %


CLASS 36: Go straight or right
Accuracy: 93.33333 %
Precision: 91.80328 %
Class has been commonly confused/misclassified as class 1- 'Speed limit (30km/h)' with probability 2.5 %


CLASS 37: Go straight or left
Accuracy: 98.33333 %
Precision: 71.08434 %
Class has been commonly confused/misclassified as class 39- 'Keep left' with probability 1.667 %


CLASS 38: Keep right
Accuracy: 91.44928 %
Precision: 97.67802 %
Class has been commonly confused/misclassified as class 37- 'Go straight or left' with probability 2.609 %


CLASS 39: Keep left
Accuracy: 97.77778 %
Precision: 88.0 %
Class has been commonly confused/misclassified as class 33- 'Turn right ahead' with probability 1.111 %


CLASS 40: Roundabout mandatory
Accuracy: 93.33333 %
Precision: 68.85246 %
Class has been commonly confused/misclassified as class 12- 'Priority road' with probability 2.222 %


CLASS 41: End of no passing
Accuracy: 98.33333 %
Precision: 100.0 %
Class has been commonly confused/misclassified as class 9- 'No passing' with probability 1.667 %


CLASS 42: End of no passing by vehicles over 3.5 metric tons
Accuracy: 100.0 %
Precision: 97.82609 %
Class has been commonly confused/misclassified as class 0- 'Speed limit (20km/h)' with probability 0.0 %


In [33]:
print("Classes 9, 14, 16, 20, 32, 34, 42 have 100 % accuracy and aren't misclassified at all!")
Classes 9, 14, 16, 20, 32, 34, 42 have 100 % accuracy and aren't misclassified at all!

Step 4: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
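A minimal sketch of building that class-id-to-name lookup with pandas is shown below (the 'signnames.csv' path and its 'ClassId'/'SignName' column names are assumptions based on the standard project layout; the arr_classes lookup used elsewhere in this notebook plays the same role):

# Minimal sketch (assumptions: 'signnames.csv' sits next to the notebook and has
# 'ClassId' and 'SignName' columns).
signnames=pd.read_csv('signnames.csv')
id_to_name=dict(zip(signnames['ClassId'],signnames['SignName']))

# Example lookup: class id 14 is 'Stop' in the German traffic sign dataset.
print(id_to_name[14])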

Load and Output the Images

In [34]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
print("Testing the CNN-model on new images obtained from the Belgian dataset- load & save data. ")
verification_folder='traffic-signs-data/Belgian-traffic-signs-data/' 
save_folder='traffic-signs-data/Unseen-images/Belgian/'
valid_choice=[]
chosen_image_Belgian=[]
n_redo=0

if len(os.listdir(save_folder))>1 and n_redo==0:
    items=os.listdir(save_folder)
    images_list=np.zeros((10,32,32,3),dtype=np.uint8)
    
    for name in items:
        if name.endswith(".ppm"):
            valid_choice.append(name)
    for i, item in enumerate(valid_choice):
        image=cv2.resize(cv2.imread(os.path.join(save_folder,item)),(32,32))
        images_list[i]=image
    # Plot, Subplots
    plt.figure(figsize=(15,15))
    for num_images in range(10):
        plt.subplot(10,1,num_images+1)
        plt.tight_layout()
        plt.imshow(images_list[num_images])
        plt.axis('off')
    plt.show()    
    
else:
        
    folder_sample=random.sample(os.listdir(verification_folder),10)
    images_path=[]
    for folder in folder_sample:

        path=os.path.join(verification_folder,folder)
        items=os.listdir(path)

        for name in items:
            if name.endswith(".ppm"):
                valid_choice.append(name)
        image_sample=random.sample(valid_choice,1)

        for i in image_sample:
            image_path=os.path.join(path,i)
            images_path.append(image_path)

    # Splitting functions for readability
    images_list=np.zeros((10,32,32,3),dtype=np.uint8)
    for i,image_index in enumerate(images_path):
        image=cv2.imread(image_index)
        images_list[i]=cv2.resize(image,(32,32))
        chosen_image_Belgian.append(image)

        save_name="Belgian-"+str(i)+".ppm"
        cv2.imwrite(os.path.join(save_folder,save_name),image)

    # Plot, Subplots
    plt.figure(figsize=(12,10))
    for num_images in range(10):
        plt.subplot(10,1,num_images+1)
        plt.tight_layout()
        plt.imshow(images_list[num_images])
        plt.axis('off')
    plt.show()
Testing the CNN-model on new images obtained from the Belgian dataset- load & save data. 
In [35]:
print("Testing the CNN-model on new images obtained from Google searches- load & save data.")
save_folder_net='traffic-signs-data/Unseen-images/Net/'
valid_choice=[]

if len(os.listdir(save_folder_net))>1:
    items=os.listdir(save_folder_net)
    images_list_searched=np.zeros((24,32,32,3),dtype=np.uint8)
    for name in items:
        if name.endswith(".jpg"):
            valid_choice.append(name)
        
    for i, item in enumerate(valid_choice):
        image=cv2.resize(cv2.imread(os.path.join(save_folder_net,item)),(32,32))
        images_list_searched[i]=image
    # Plot, Subplots
    plt.figure(figsize=(12,10))
    for num_images in range(5):
        plt.subplot(5,1,num_images+1)
        plt.tight_layout()
        plt.imshow(images_list_searched[num_images])
        plt.axis('off')
    plt.show()    
Testing the CNN-model on new images obtained from Google searches- load & save data.

Predict the Sign Type for Each Image

In [36]:
def Model_evaluate(new_images,sess):
    prediction=sess.run(tf.argmax(logits_cnn,1), feed_dict={X: new_images, keep_prob: 1.0})
    top_k=tf.nn.top_k(tf.nn.softmax(logits_cnn),5, sorted=True)
    top_k_pred=sess.run(top_k, feed_dict={X: new_images, keep_prob:1.0})
    return prediction, top_k_pred

Using Belgian dataset images

In [37]:
## Preprocessing new images
Belgian_processed_images=[]

for image in images_list:
    Belgian_image_grayscale=(grayscale(image)).reshape(32,32,1)
    Belgian_processed_images.append(normalize(Belgian_image_grayscale))
print("Preprocessing Test images complete")
Preprocessing Test images complete
In [38]:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess,save_path)

    predicted_label_Belgian, top_k_Belgian=Model_evaluate(np.asarray(Belgian_processed_images),sess) 
INFO:tensorflow:Restoring parameters from ./tf-sessions-data/simplecnn_m2_e75_lr100
Analyze Performance & Output Softmax Probabilities- Belgian dataset
In [39]:
plt.figure(figsize=(20,40))
for i in range(len(Belgian_processed_images)):
    plt.subplot(len(Belgian_processed_images),2,2*i+1)
    plt.imshow(images_list[i])
    plt.axis('off')
    plt.title("Image "+str(i)+" predicted: "+arr_classes[predicted_label_Belgian[i]])
    
    plt.subplot(len(Belgian_processed_images),2,2*i+2)
    plt.barh(np.arange(1,6,1),top_k_Belgian.values[i,:])
    plt.yticks(np.arange(1,6,1),[arr_classes[ind] for ind in top_k_Belgian.indices[i]]) # Materialize labels as a list for yticks.
    
    plt.xlabel('Probability')
    plt.ylabel('Top K-predictions for image')
plt.show()    

Using images found on the net

In [40]:
## Preprocessing new images
Net_processed_images=[]

for image in images_list_searched:
    Net_image_grayscale=(grayscale(image)).reshape(32,32,1)
    Net_processed_images.append(normalize(Net_image_grayscale))
print("Preprocessing Test images complete")
Preprocessing Test images complete
/home/carnd/anaconda3/envs/carnd-term1/lib/python3.5/site-packages/ipykernel/__main__.py:12: RuntimeWarning: invalid value encountered in true_divide
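The RuntimeWarning above usually indicates a division by a zero-valued range inside normalize(), e.g. for an image whose pixels are all identical after grayscaling. As a hedged sketch, assuming normalize() performs per-image min-max scaling (its actual implementation is defined earlier in the notebook), a guarded variant could look like this:

In [ ]:
# Sketch only: a min-max normalization guarded against a zero range.
# normalize_safe is a hypothetical helper; adapt it to match the notebook's
# actual normalize() (e.g. if it uses (pixel - 128) / 128 instead).
def normalize_safe(image):
    image = image.astype(np.float32)
    rng = image.max() - image.min()
    if rng == 0:
        # A flat image would otherwise produce the divide-by-zero warning above.
        return np.zeros_like(image)
    return (image - image.min()) / rng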
In [41]:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess,save_path)

    predicted_label_Net, top_k_Net=Model_evaluate(np.asarray(Net_processed_images),sess) 
INFO:tensorflow:Restoring parameters from ./tf-sessions-data/simplecnn_m2_e75_lr100
Analyze Performance & Output Softmax Probabilities
In [42]:
plt.figure(figsize=(30,70))
for i in range(len(Net_processed_images)):
    plt.subplot(len(Net_processed_images),2,2*i+1)
    plt.imshow(images_list_searched[i])
    plt.axis('off')
    plt.title("Image "+str(i)+" predicted: "+arr_classes[predicted_label_Net[i]])
    
    plt.subplot(len(Net_processed_images),2,2*i+2)
    plt.barh(np.arange(1,6,1),top_k_Net.values[i,:])
    plt.yticks(np.arange(1,6,1),[arr_classes[ind] for ind in top_k_Net.indices[i]]) # Materialize labels as a list for yticks.
    
    plt.xlabel('Probability')
    plt.ylabel('Top K-predictions for image')
plt.show()    
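The cell above only visualizes the softmax probabilities; to quantify performance on the searched images, the predictions can be compared against hand-recorded ground truth. A minimal sketch, assuming a hypothetical list y_net_true filled by hand with the GTSRB class IDs of the downloaded images (the same idea applies to the Belgian images):

In [ ]:
# Sketch only: accuracy on the searched images, given manually recorded labels.
# y_net_true is hypothetical; fill it with the true class IDs, in the same
# order as images_list_searched, before running.
y_net_true = []  # e.g. [14, 17, 2, ...]
if len(y_net_true) == len(predicted_label_Net):
    accuracy_net = np.mean(np.asarray(y_net_true) == predicted_label_Net)
    print("Accuracy on searched images: {:.1%}".format(accuracy_net))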

Step 5 (Optional): Visualize the Neural Network's State with Test Images

This section is not required to complete the project, but it serves as an additional exercise for understanding the output of a neural network's weights. While neural networks can be great learning devices, they are often referred to as black boxes. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network, you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it is possible to see what characteristics of an image the network finds interesting. For a sign, the inner feature maps may react with high activation to the sign's boundary outline or to the contrast of the sign's painted symbol.

Provided below is the function code that lets you obtain the visualization output of any TensorFlow weight layer you want. The inputs to the function are a stimulus image, either one used during training or a new one you provide, and the TensorFlow variable that represents the layer's state during the training process. For instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer, you could pass conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars, in the section Visualization of Internal CNN State. NVIDIA showed that their network's inner weights had high activations for road boundary lines by comparing feature maps from an image with a clear path against one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether by comparing feature maps from images with and without a sign, or by comparing the feature maps of a trained network against those of a completely untrained one on the same sign image; a minimal sketch of the trained-versus-untrained comparison follows the layer-activation cells near the end of this notebook.

Combined Image

Your output should look something like this (above)

In [43]:
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, ect if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess,feed_dict={X : image_input, keep_prob: 1.0}) # keep_prob is fed so layers after dropout can also be visualized.
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15,15))
    for featuremap in range(featuremaps):
        plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1: # Use 'and', not '&': '&' binds tighter than '!=' and breaks this check.
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min !=-1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
In [44]:
# Another method to visualize activations
def Plot_activations(activations):
    filters=activations.shape[3]
    plt.figure(1,figsize=(12,12))
    cols=5
    rows=math.ceil(filters/cols)+1
    for i in range(filters):
        plt.subplot(rows,cols,i+1)
        plt.title('Filter'+str(i))
        plt.imshow(activations[0,:,:,i], interpolation='nearest', cmap='gray')
    
def Obtain_activations(layer,sample_image):
    activations=sess.run(layer,feed_dict={X:np.reshape(sample_image,[1,32,32,1]), keep_prob:1.0})
    Plot_activations(activations)
In [45]:
# Selecting a random image from the validation dataset.
rand_index=random.randint(0,len(X_valid_shuffle)-1) # randint is inclusive at both ends; subtract 1 to stay in range.
selected_image=X_valid_shuffle[rand_index].reshape(1,32,32,1)

plt.title('Image chosen from the validation dataset, Index {}'.format(rand_index))
plt.imshow(selected_image.reshape(32,32), cmap='gray')
plt.show()
In [46]:
# Listing paths in the graph
save_path='./tf-sessions-data/simplecnn_m2_e75_lr100'

activation_1="EntropyCost/Convolution_Layer_1/activation_1:0"
activation_2="EntropyCost/Convolution_Layer_2/activation_2:0"
activation_3="EntropyCost/Convolution_Layer_3/activation_3:0"
activation_4="EntropyCost/Dense_Layer_1/activation_4:0"
activation_5="EntropyCost/Dense_Layer_2/activation_5:0"
activation_6="EntropyCost/Dense_Layer_3/activation_6:0"
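These tensor names depend on the name scopes used when the graph was built; if in doubt, the operations in the default graph can be listed to confirm them. A minimal sketch (the graph is already constructed at this point, so no session is needed just to inspect it):

In [ ]:
# Sketch only: print operation names containing 'activation' to confirm the
# tensor names listed above.
for op in tf.get_default_graph().get_operations():
    if 'activation' in op.name:
        print(op.name)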
In [47]:
print("Layer One Activation")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess,save_path)
       
    act1=tf.get_default_graph().get_tensor_by_name(activation_1)
    outputFeatureMap(selected_image,act1)
Layer One Activation
INFO:tensorflow:Restoring parameters from ./tf-sessions-data/simplecnn_m2_e75_lr100
In [48]:
print("Obtain Activations Layer One")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    saver.restore(sess,save_path)
    Obtain_activations(activation_1,selected_image)
Obtain Activations Layer One
INFO:tensorflow:Restoring parameters from ./tf-sessions-data/simplecnn_m2_e75_lr100
In [49]:
print("Obtain Activations Layer Two")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    saver.restore(sess,save_path)
    Obtain_activations(activation_2,selected_image)
Obtain Activations Layer Two
INFO:tensorflow:Restoring parameters from ./tf-sessions-data/simplecnn_m2_e75_lr100
In [50]:
print("Obtain Activations Layer Three")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    saver.restore(sess,save_path)
    Obtain_activations(activation_3,selected_image)
Obtain Activations Layer Three
INFO:tensorflow:Restoring parameters from ./tf-sessions-data/simplecnn_m2_e75_lr100
print("Obtain Activations Layer Four") with tf.Session() as sess: sess.run(tf.global_variables_initializer()) saver.restore(sess,save_path) Obtain_activations(activation_4,selected_image)print("Obtain Activations Layer Five") with tf.Session() as sess: sess.run(tf.global_variables_initializer()) saver.restore(sess,save_path) Obtain_activations(activation_5,selected_image)print("Obtain Activations Layer Six") with tf.Session() as sess: sess.run(tf.global_variables_initializer()) saver.restore(sess,save_path) Obtain_activations(activation_6,selected_image)
In [54]:
sess.close()
In [55]:
print("Script terminated at",str(datetime.now()))
Script terminated at 2017-08-24 06:57:44.778740
In [ ]: